Problem with a Python crawler for the KuGou Top 500: paged crawling fails

topic description

Following the book, I am trying to crawl the KuGou Music Top 500. My idea was to first build all the page URLs, then request each page, parse the responses with BeautifulSoup, and finally print the results. But it throws an error, and as far as I can tell the approach itself should be fine. Could anyone help?
It reports an error:
No connection adapters were found for '["http://www.kugou.com/yy/rank/home/1-8888.html?from=rank"]'
My code is as follows:

related code

import requests
from bs4 import BeautifulSoup
import time
headers = {
    "User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0"
}  # request headers

def get_info(url):  # fetch and parse one ranking page
    res = requests.get(url, headers=headers)
    soup = BeautifulSoup(res.text, "lxml")
    # rank:
    nums = soup.select(".pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(3) > strong:nth-of-type(1)")
    # singer - song title:
    titles = soup.select(".pc_temp_songlist > ul:nth-of-type(1) > li > a:nth-of-type(4)")
    # duration:
    times = soup.select(".pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(5) > span:nth-of-type(4)")
    for num, title, time in zip(nums, titles, times):
        data = {
            "rank": num.get_text().strip(),
            "singer": title.get("title").get_text().split("-")[0],
            "song": prices.get("title").get_text().split("-")[1],
            "time": address.get_text().strip(),
        }
        print(data)
        time.sleep(2)

    

main program


# pages 1-23
urls = ["http://www.kugou.com/yy/rank/home/{}-8888.html?from=rank".format(number) for number in range(1, 24)]
for single_url in urls:
    get_info(single_url)
    time.sleep(5)
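
For what it's worth, the URL template itself looks fine; a quick sanity check of the first and last generated page URLs:

print(urls[0])   # http://www.kugou.com/yy/rank/home/1-8888.html?from=rank
print(urls[-1])  # http://www.kugou.com/yy/rank/home/23-8888.html?from=rank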

error message

The main program just sat there without printing anything, so I tried crawling only the first page, "http://www.kugou.com/yy/rank/home/1-8888.html?from=rank", and it still failed. The strange thing is that it wouldn't connect, even though the page opens fine when I click it in a browser. The code is as follows:

url = ["http://www.kugou.com/yy/rank/home/1-8888.html?from=rank"]
get_info(url)

the error is as follows:

No connection adapters were found for "["http://www.kugou.com/yy/rank/home/1-8888.html?from=rank"]"
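
As an aside, requests produces exactly this message when the value handed to requests.get is not a string URL; here the square brackets make it a one-element list, and no "connection adapter" matches a list. A minimal sketch of the corrected call:

url = "http://www.kugou.com/yy/rank/home/1-8888.html?from=rank"  # a plain string, not a list
get_info(url)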

I searched Baidu for this error, but nothing I tried worked, and there is very little written about it. Please help, everyone!

Apr. 09, 2021

nums = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(3) > strong:nth-of-type(1)')
titles = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > a:nth-of-type(4)')
times = soup.select('.pc_temp_songlist > ul:nth-of-type(1) > li > span:nth-of-type(5) > span:nth-of-type(4)')

These selectors don't match anything in the page, so the parsing yields empty lists and of course nothing is printed.
The script isn't really stuck: each cycle just sleeps about 7 seconds while producing no output, so the "hang" is an illusion caused by the emptiness.
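
To see why nothing at all is printed, note that zip over empty sequences yields no items, so the loop body (including its print) never runs; a tiny illustration with hypothetical empty results:

nums, titles, times = [], [], []  # what the failing selectors actually return
for num, title, t in zip(nums, titles, times):
    print(num)  # never reached: zip() over empty lists yields nothing
print('done')  # this is the only output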
The following code is for reference:
import requests
from bs4 import BeautifulSoup

url = 'http://www.kugou.com/yy/rank/home/{}-8888.html?from=rank'

def get_info(url):
    res = requests.get(url)
    soup = BeautifulSoup(res.text, 'lxml')
    # each song row is an <li> whose title attribute holds "singer - song"
    infoes = soup.select('div.pc_temp_songlist ul li')
    for info in infoes:
        nums = info.select('span.pc_temp_num')[0].text.strip()
        singer, name = info['title'].split('-', 1)
        times = info.select('span.pc_temp_tips_r span.pc_temp_time')[0].text.strip()
        print({'rank': nums, 'singer': singer, 'song': name, 'time': times})

if __name__ == '__main__':
    urls = [url.format(i) for i in range(1, 24)]
    for url in urls:
        get_info(url)
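
For completeness, here is a sketch that combines this answer's parsing with the User-Agent header and polite delay from the question's original code; the selectors are the ones above, and whether KuGou still serves this markup is not guaranteed:

import time
import requests
from bs4 import BeautifulSoup

HEADERS = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:61.0) Gecko/20100101 Firefox/61.0'
}
URL = 'http://www.kugou.com/yy/rank/home/{}-8888.html?from=rank'

def get_info(url):
    res = requests.get(url, headers=HEADERS)  # browser-like User-Agent from the original code
    soup = BeautifulSoup(res.text, 'lxml')
    for info in soup.select('div.pc_temp_songlist ul li'):
        rank = info.select('span.pc_temp_num')[0].text.strip()
        singer, song = info['title'].split('-', 1)  # the li's title attribute is "singer - song"
        duration = info.select('span.pc_temp_tips_r span.pc_temp_time')[0].text.strip()
        print({'rank': rank, 'singer': singer.strip(), 'song': song.strip(), 'time': duration})

if __name__ == '__main__':
    for page in range(1, 24):  # pages 1-23, as in the original code
        get_info(URL.format(page))
        time.sleep(2)  # pause between pages instead of hammering the server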
