License: Apache-2.0 License
Language: JavaScript
Operating system: Cross-platform
Software type: Open source software
Organization:
Region: Unknown
Submitted by: 首席测试
Target audience: Unknown
Listed on: 2021-12-02

Software introduction

Apify SDK: The scalable web crawling and scraping library for JavaScript


Apify SDK simplifies the development of web crawlers, scrapers, data extractors and web automation jobs. It provides tools to manage and automatically scale a pool of headless browsers, maintain queues of URLs to crawl, store crawling results on a local filesystem or in the cloud, rotate proxies, and much more. The SDK is available as the apify NPM package. It can be used either stand-alone in your own applications or in actors running on the Apify Cloud.

View full documentation, guides and examples on the Apify SDK project website

Motivation

Thanks to tools like Playwright, Puppeteer or Cheerio, it is easy to write Node.js code to extract data from web pages. But eventually things will get complicated. For example, when you try to:

  • Perform a deep crawl of an entire website using a persistent queue of URLs.
  • Run your scraping code on a list of 100k URLs in a CSV file, without losing any data when your code crashes.
  • Rotate proxies to hide your browser origin and keep user-like sessions.
  • Disable browser fingerprinting protections used by websites.

Python has Scrapy for these tasks, but there was no such library for JavaScript, the language of the web. The use of JavaScript is natural, since the same language is used to write the scripts as well as the data extraction code running in a browser.

The goal of the Apify SDK is to fill this gap and provide a toolbox for generic web scraping, crawling and automation tasks in JavaScript. So don't reinvent the wheel every time you need data from the web; focus on writing code specific to the target website, rather than re-implementing the common plumbing.

Overview

The Apify SDK is available as the apify NPM package and it provides the following tools:

  • CheerioCrawler - Enables the parallel crawling of a large number of web pages using the cheerio HTML parser. This is the most efficient web crawler, but it does not work on websites that require JavaScript. (A minimal usage sketch follows this list.)

  • PuppeteerCrawler - Enables the parallel crawling of a large number of web pages using the headless Chrome browser and Puppeteer. The pool of Chrome browsers is automatically scaled up and down based on available system resources.

  • PlaywrightCrawler - Unlike PuppeteerCrawler, it uses Playwright, which can manage almost any headless browser. Playwright also provides a cleaner and more mature interface while keeping the ease of use and advanced features.

  • BasicCrawler - Provides a simple framework for the parallel crawling of web pages whose URLs are fed either from a static list or from a dynamic queue of URLs. This class serves as a base for the more specialized crawlers above.

  • RequestList - Represents a list of URLs to crawl. The URLs can be passed in code or in a text file hosted on the web. The list persists its state so that crawling can resume when the Node.js process restarts.

  • RequestQueue - Represents a queue of URLs to crawl, which is stored either on a local filesystem or in the Apify Cloud. The queue is used for deep crawling of websites, where you start with several URLs and then recursively follow links to other pages. The data structure supports both breadth-first and depth-first crawling orders.

  • Dataset - Provides a store for structured data and enables exporting it to formats like JSON, JSONL, CSV, XML, Excel or HTML. The data is stored on a local filesystem or in the Apify Cloud. Datasets are useful for storing and sharing large tabular crawling results, such as a list of products or real estate offers.

  • KeyValueStore - A simple key-value store for arbitrary data records or files, along with their MIME content type. It is ideal for saving screenshots of web pages or PDFs, or for persisting the state of your crawlers. The data is stored on a local filesystem or in the Apify Cloud.

  • AutoscaledPool - Runs asynchronous background tasks, while automatically adjusting the concurrency based on free system memory and CPU usage. This is useful for running web scraping tasks at the maximum capacity of the system.

  • Browser Utils - Provides several helper functions useful for web scraping. For example, to inject jQuery into web pages or to hide browser origin.
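
To make these pieces concrete, here is a minimal sketch (not part of the original README) that combines a RequestList, a CheerioCrawler and the default Dataset; the start URL mirrors the one used in the Quick Start below:

const Apify = require('apify');

Apify.main(async () => {
    // A static list of start URLs whose state is persisted automatically.
    const requestList = await Apify.openRequestList('start-urls', [
        'https://www.iana.org/',
    ]);

    const crawler = new Apify.CheerioCrawler({
        requestList,
        handlePageFunction: async ({ request, $ }) => {
            // Store one structured record per page into the default dataset.
            await Apify.pushData({
                url: request.url,
                title: $('title').text(),
            });
        },
    });

    await crawler.run();
});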

Additionally, the package provides various helper functions to simplify running your code on the Apify Cloud and thus take advantage of its pool of proxies, job scheduler, data storage, etc. For more information, see the Apify SDK Programmer's Reference.
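
As one hedged example of that integration, the sketch below rotates Apify Proxy through a CheerioCrawler. It assumes an Apify account and an APIFY_TOKEN environment variable, and follows the SDK v1/v2 call signatures:

const Apify = require('apify');

Apify.main(async () => {
    // Requires APIFY_TOKEN; without it, createProxyConfiguration() will fail.
    const proxyConfiguration = await Apify.createProxyConfiguration();

    const requestQueue = await Apify.openRequestQueue();
    await requestQueue.addRequest({ url: 'https://www.iana.org/' });

    const crawler = new Apify.CheerioCrawler({
        requestQueue,
        proxyConfiguration, // each request is sent through a rotated proxy
        handlePageFunction: async ({ request, $ }) => {
            console.log(`${request.url}: ${$('title').text()}`);
        },
    });

    await crawler.run();
});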

Quick Start

This short tutorial will set you up to start using Apify SDK in a minute or two. If you want to learn more, proceed to the Getting Started tutorial that will take you step by step through creating your first scraper.

Local stand-alone usage

Apify SDK requires Node.js 15.10 or later. Add Apify SDK to any Node.js project by running:

npm install apify playwright

Neither playwright nor puppeteer is bundled with the SDK, to reduce install size and allow greater flexibility. That's why we install Playwright explicitly with NPM. You can choose one, both, or neither.
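
If you prefer Puppeteer, or want both libraries available, the install is analogous:

npm install apify puppeteer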

Run the following example to perform a recursive crawl of a website using Playwright. For more examples showcasing various features of the Apify SDK, see the Examples section of the documentation.

const Apify = require('apify');

// Apify.main is a helper function, you don't need to use it.
Apify.main(async () => {
    const requestQueue = await Apify.openRequestQueue();
    // Choose the first URL to open.
    await requestQueue.addRequest({ url: 'https://www.iana.org/' });

    const crawler = new Apify.PlaywrightCrawler({
        requestQueue,
        handlePageFunction: async ({ request, page }) => {
            // Extract HTML title of the page.
            const title = await page.title();
            console.log(`Title of ${request.url}: ${title}`);

            // Add URLs that match the provided pattern.
            await Apify.utils.enqueueLinks({
                page,
                requestQueue,
                pseudoUrls: ['https://www.iana.org/[.*]'],
            });
        },
    });

    await crawler.run();
});

When you run the example, you should see Apify SDK automating a Chrome browser.


By default, Apify SDK stores data to ./apify_storage in the current working directory. You can override this behavior by setting either the APIFY_LOCAL_STORAGE_DIR or APIFY_TOKEN environment variable. For details, see Environment variables, Request storage and Result storage.
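
For example, here is a minimal sketch (the directory name is hypothetical) that points local storage at a custom directory by setting the environment variable before the SDK is first used:

// Must be set before the SDK first opens a storage.
process.env.APIFY_LOCAL_STORAGE_DIR = './my_custom_storage';

const Apify = require('apify');

Apify.main(async () => {
    const store = await Apify.openKeyValueStore();
    // This record lands in ./my_custom_storage/key_value_stores/default/.
    await store.setValue('OUTPUT', { hello: 'world' });
});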

Local usage with Apify command-line interface (CLI)

To avoid the need to set the environment variables manually, to create a boilerplate of your project, and to enable pushing and running your code on the Apify platform, you can use the Apify command-line interface (CLI) tool.

Install the CLI by running:

npm -g install apify-cli

Now create a boilerplate of your new web crawling project by running:

apify create my-hello-world

The CLI will prompt you to select a project boilerplate template - just pick "Hello world". The tool will create a directory called my-hello-world containing the Node.js project files. You can run the project as follows:

cd my-hello-world
apify run

By default, the crawling data will be stored in a local directory at ./apify_storage. For example, the input JSON file for the actor is expected to be in the default key-value store in ./apify_storage/key_value_stores/default/INPUT.json.
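
For instance, a minimal sketch of an actor that reads that INPUT.json through the SDK rather than from the file path directly:

const Apify = require('apify');

Apify.main(async () => {
    // Reads the default key-value store's INPUT record, i.e.
    // ./apify_storage/key_value_stores/default/INPUT.json when run locally.
    const input = await Apify.getInput();
    console.log('Actor input:', input);
});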

Now you can easily deploy your code to the Apify platform by running:

apify login
apify push

Your script will be uploaded to the Apify platform and built there so that it can be run. For more information, view the Apify Actor documentation.

Usage on the Apify platform

You can also develop your web scraping project in an online code editor directly on the Apify platform. You'll need an Apify account. Go to the Actors page in the app, click Create new, then go to the Source tab and start writing your code, or paste one of the examples from the Examples section.

For more information, view the Apify actors quick start guide.

Support

If you find a bug or an issue with the Apify SDK, please submit an issue on GitHub. For questions, you can ask on Stack Overflow or contact support@apify.com.

Contributing

Your code contributions are welcome and you'll be praised to eternity! If you have any ideas for improvements, either submit an issue or create a pull request. For contribution guidelines and the code of conduct, see CONTRIBUTING.md.

License

This project is licensed under the Apache License 2.0 - see the LICENSE.md file for details.

Acknowledgments

Many thanks to Chema Balsas for giving up the apify package name on NPM and renaming his project to jsdocify.
