Date: 2021-07-31  Author: daque
<%@ page contentType="text/html;charset=UTF-8"%><%
// Fetch http://www.163.net/ and echo its HTML into the response.
String sCurrentLine = "";
String sTotalString = "";
java.io.InputStream l_urlStream;
java.net.URL l_url = new java.net.URL("http://www.163.net/");
java.net.HttpURLConnection l_connection = (java.net.HttpURLConnection) l_url.openConnection();
l_connection.connect();
l_urlStream = l_connection.getInputStream();
java.io.BufferedReader l_reader = new java.io.BufferedReader(new java.io.InputStreamReader(l_urlStream));
while ((sCurrentLine = l_reader.readLine()) != null) {
    sTotalString += sCurrentLine;
}
out.println(sTotalString);
%>

Postscript: Although the code is quite simple, I think it could be extended into a "web crawler": find the href links in the fetched page, fetch each of those links, and then "crawl" again, repeatedly (the recursion depth can of course be limited). In this way, a "web search" feature could be built.
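The crawler idea in the postscript can be sketched in plain Java. This is a minimal illustration, not a production crawler: the class name `LinkExtractor`, the regex-based `extractLinks` helper, and the depth-limited `crawl` method are all my own assumptions, and a real crawler would use an HTML parser and track visited URLs rather than a simple regex.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class LinkExtractor {

    // Naive regex for href attribute values; a real crawler should use an HTML parser.
    static final Pattern HREF =
            Pattern.compile("href=[\"']([^\"']+)[\"']", Pattern.CASE_INSENSITIVE);

    // Collect all href values found in an HTML string.
    static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<>();
        Matcher m = HREF.matcher(html);
        while (m.find()) {
            links.add(m.group(1));
        }
        return links;
    }

    // Fetch a page's HTML with HttpURLConnection, same approach as the JSP above.
    static String fetch(String urlString) throws Exception {
        java.net.HttpURLConnection conn =
                (java.net.HttpURLConnection) new java.net.URL(urlString).openConnection();
        try (java.io.BufferedReader reader = new java.io.BufferedReader(
                new java.io.InputStreamReader(conn.getInputStream()))) {
            StringBuilder sb = new StringBuilder();
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line);
            }
            return sb.toString();
        }
    }

    // Depth-limited recursive crawl, as the postscript suggests.
    static void crawl(String url, int depth) throws Exception {
        if (depth <= 0) return;
        String html = fetch(url);
        for (String link : extractLinks(html)) {
            if (link.startsWith("http")) {
                crawl(link, depth - 1);
            }
        }
    }

    public static void main(String[] args) {
        // Demonstrate link extraction on a small HTML fragment (no network needed).
        String html = "<a href=\"http://www.163.net/\">163</a> <a href='/about'>about</a>";
        System.out.println(extractLinks(html));
    }
}
```

Limiting `depth` keeps the recursion from running forever, exactly as the postscript notes; a visited-URL set would also be needed in practice to avoid re-fetching the same pages.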