Soup.find('table') in Python crawler cannot find content

Beginner asking for advice: I want to crawl a website and get the content of the table below.

The screenshot failed to upload; the table starts like this:

<table class="list-invecase">
                    <tbody>
                    <tr>
                        <td class="date">
                            <span class="verdana">2018-05-12</span>
         
       

this is what I wrote:

import urllib.request
import requests 
import re 
from bs4 import BeautifulSoup
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0"} 
page_html = requests.get("https://www.itjuzi.com/investfirm/1", headers=headers)
Soup = BeautifulSoup(page_html.text, "lxml")
art_list = Soup.find("table")
print(art_list)

but the result is:

<table class="list-invecase">
<tbody></tbody>
</table>

I would like to ask how to get the date, name, industry, and so on that I want. I have only just started with this and don't understand much yet. Please advise. Thank you.

Mar.11,2021

First of all, to crawl this kind of site you have to look at the actual source code of the page, not just what you see when you right-click and inspect in the browser. requests.get returns the raw source of the page, not the HTML you see after the browser has run its JavaScript.
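
As a quick check (a minimal sketch, assuming the date 2018-05-12 from your screenshot is one of the values you expect to find), you can test whether that text appears in the raw HTML at all:

import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0"}
page_html = requests.get("https://www.itjuzi.com/investfirm/1", headers=headers)

# If this prints False, the date is not in the raw source at all,
# so it must be filled in later by JavaScript.
print("2018-05-12" in page_html.text)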
If you search that source with soup.findAll('table'), the <table class="list-invecase"> tag is there, but the <td class="date"> and <span class="verdana"> cells you see in the browser are not inside it. The page builds that table with JavaScript, which loads the data from a separate request: https://www.itjuzi.com/invest...

You can see all of this through the source code of the web page.

Then requests.get that URL and parse the response (you may need the json module, or simply resp.json()) to get what you want.
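
Roughly like this. This is only a sketch: the endpoint URL below is just the truncated placeholder from above, and the JSON field names (data, date, name, industry) are guesses for illustration, so check the real request and the real response structure in the browser's network panel first.

import requests

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0"}

# Hypothetical endpoint: replace with the real URL from the network panel.
api_url = "https://www.itjuzi.com/invest..."

resp = requests.get(api_url, headers=headers)
data = resp.json()  # parse the JSON body

# Field names here are illustrative guesses; print(data) first and adapt.
for item in data.get("data", []):
    print(item.get("date"), item.get("name"), item.get("industry"))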

Second, .find('table') only returns the first <table> tag it encounters, so if you are not sure how many <table> tags the page has, use .find_all('table') (findAll is the older name for the same method) to see how many there are and pick the right one, or add qualifiers to the search, such as Soup.find('table', class_='list-invecase') or Soup.find('span', class_='verdana'), which makes it easier to find exactly what you want.
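
For example, using the class names from the snippet in the question:

import requests
from bs4 import BeautifulSoup

headers = {"User-Agent": "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:59.0) Gecko/20100101 Firefox/59.0"}
page_html = requests.get("https://www.itjuzi.com/investfirm/1", headers=headers)
soup = BeautifulSoup(page_html.text, "lxml")

# See how many <table> tags the page actually contains.
print(len(soup.find_all("table")))

# Narrow the search with a class qualifier instead of taking the first match.
table = soup.find("table", class_="list-invecase")
dates = soup.find_all("span", class_="verdana")
print(table)
print([d.get_text(strip=True) for d in dates])

# On this particular page both will still be empty, because the rows are
# added by JavaScript; the qualifiers only help once you have HTML that
# actually contains the data.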

Finally, I suggest reading the official documentation: https://www.crummy.com/softwa...
