Installation and Setup
To get started with LlamaIndex and Hyperbrowser, install the necessary packages using pip.

You can get an API key from the Hyperbrowser dashboard. Once you have your API key, add it to your .env file:

HYPERBROWSER_API_KEY=<your-api-key>

Alternatively, you can pass it via the api_key argument in the HyperbrowserWebReader constructor.
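The original install command did not survive extraction; assuming the reader ships in the llama-index-readers-web package and the Hyperbrowser Python SDK is published as hyperbrowser (both inferred from the names above, not confirmed by this page), installation would look like:

```shell
# Assumed package names; check the integration's own install instructions
pip install llama-index-readers-web hyperbrowser
```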
Usage
Once you have your API key and have installed the packages, you can load webpages into LlamaIndex using HyperbrowserWebReader. The reader supports two operations: scrape and crawl. For scrape, you can provide a single URL or a list of URLs to be scraped. For crawl, you can provide only a single URL; the crawl operation will crawl the provided page and its subpages and return a document for each page. HyperbrowserWebReader supports loading and lazy loading data in both sync and async modes.
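A minimal sketch of the scrape operation, assuming HyperbrowserWebReader is importable from llama_index.readers.web and that load_data accepts urls and operation arguments (the URL and key below are placeholders, and running this requires a valid Hyperbrowser API key):

```python
from llama_index.readers.web import HyperbrowserWebReader

# Pass the key directly, or set HYPERBROWSER_API_KEY in your environment
reader = HyperbrowserWebReader(api_key="<your-api-key>")

# "scrape" accepts a single URL or a list of URLs;
# each scraped page comes back as one Document
documents = reader.load_data(
    urls=["https://example.com"],
    operation="scrape",
)
```

For the async and lazy variants mentioned above, LlamaIndex readers conventionally expose aload_data and lazy_load_data counterparts with the same arguments.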
Scrape and crawl options can be supplied via the optional params argument. For more information on the supported params, see the scraping guide. Note that params are snake case in Python code, so, for example, it is max_pages instead of maxPages.
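As a sketch of passing params, a crawl capped at a few pages might look like the following; max_pages is the snake-case example from above, while the URL, key, and the exact shape of the params dict are assumptions to verify against the scraping guide:

```python
from llama_index.readers.web import HyperbrowserWebReader

reader = HyperbrowserWebReader(api_key="<your-api-key>")

# Crawl the page and its subpages, limited to 5 pages total;
# note snake_case (max_pages), not camelCase (maxPages)
documents = reader.load_data(
    urls=["https://example.com"],
    operation="crawl",
    params={"max_pages": 5},
)
```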