[Question] How to use Python to crawl the content inside tr/td elements on a web page

How can I extract the 2.1610 that appears in the gray part of this web page's HTML?

I also have a series of HTML pages that are highly similar to this one, and I want to extract the number in the same position from each of them. How should I use BeautifulSoup to complete my code?

My code so far is as follows (the commented-out lines use the code from the first answer):

from urllib.request import urlopen
from bs4 import BeautifulSoup

def getLinks(articleUrl):
    html = urlopen(articleUrl)
    #s = '<tr><td><b><a href=".././statistics/power" title="Exponent of the power-law degree distibution">Power law exponent (estimated) with d<sub>min</sub></a></b></td><td>2.1610(d<sub>min</sub> = 2) </td></tr>'
    #soup = BeautifulSoup(s, "html.parser")
    #print(soup.find_all("td")[1].contents[0][:-2])

Python generally offers the following approaches for parsing a web page:

1. String methods (https://docs.python.org/3/library/stdtypes.html#string-methods)
2. Regular expressions (https://docs.python.org/3/library/re.html)
3. An HTML/XML parsing library (such as the well-known BeautifulSoup)

For the example you gave, suppose the snippet is stored in the string s.

1. String methods:

>>> s.split('<td>')[-1].split('(d')[0]
'2.1610'

2. Regular expressions:

>>> import re
>>> pattern = re.compile(r'</b></td><td>(.*)\(d<sub>')
>>> pattern.findall(s)
['2.1610']

3. BeautifulSoup:

>>> from bs4 import BeautifulSoup
>>> soup = BeautifulSoup(s, 'html.parser')
>>> soup.find_all('td')[1].contents[0][:-2]
'2.1610'

All of the approaches above are written ad hoc for the specific example you gave.
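
Since you need to do this across a series of highly similar pages, a more robust approach is to locate the cell by its label text rather than by position, and to combine the fetch and the parse in one function. Below is a minimal sketch along the lines of your getLinks function; the function name getPowerLawExponent and the "Power law exponent" label match are assumptions based on the snippet you posted, so adjust them if the real pages differ.

from urllib.request import urlopen
from bs4 import BeautifulSoup

def getPowerLawExponent(articleUrl):
    # Fetch and parse the page (BeautifulSoup accepts the file-like object
    # returned by urlopen directly).
    soup = BeautifulSoup(urlopen(articleUrl), "html.parser")

    # Locate the label cell by its visible text rather than by position,
    # so the lookup still works if other <td> cells come before it.
    # (The "Power law exponent" text is assumed from the snippet above.)
    label_td = soup.find(
        lambda tag: tag.name == "td" and "Power law exponent" in tag.get_text()
    )
    if label_td is None:
        return None  # this page has no such row

    # The value sits in the <td> right after the label cell; its text looks
    # like "2.1610(dmin = 2)", so keep only the part before the "(".
    value_td = label_td.find_next_sibling("td")
    return value_td.get_text().split("(")[0].strip()

On a page laid out like your snippet, getPowerLawExponent(url) should return the string '2.1610'; looping it over your list of URLs gives you the value in the same position on each page.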
