
Maibot

First published: March 24, 2026
Last updated: March 24, 2026

Maibot is a web crawler (web robot) developed by Master AI, Inc. ("Master AI", "us", "we", "our"). It discovers publicly available web content to support our current or anticipated business goals, products, and services. Maibot has not yet launched; we are publishing this page ahead of launch for transparency. The earliest we expect to begin crawling is July 1, 2026, but the launch may slip well past that date, or be delayed indefinitely. If and when Maibot launches, we will update this page to reflect that fact.

When/if Maibot launches, it will operate according to these guidelines.

Respecting robots.txt

Maibot obeys the Robots Exclusion Protocol. Before crawling any domain, Maibot fetches and parses the site's robots.txt file and strictly follows all Disallow directives applicable to its user agent. We do not attempt to circumvent, ignore, or work around any instructions found in robots.txt.

Because Maibot is a new and relatively unknown crawler, most robots.txt files will not mention it by name. In cases where a site's robots.txt does not include a Maibot directive but does include a Googlebot directive, Maibot will follow the Googlebot instructions as a proxy for the site's general crawling preferences. We believe this is the fairest approach: it respects the spirit of the site operator's intent rather than exploiting the absence of an explicit rule. If Maibot becomes well-known enough that most sites address it directly, we will revisit and update this behavior accordingly. (Note: Applebot follows this same fallback behavior[ref].)
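As an illustration, the fallback described above can be sketched in Python with the standard library's urllib.robotparser. This is our illustration of the policy, not Maibot's actual implementation, and the helper names agent_for_rules and is_allowed are hypothetical:

```python
import urllib.robotparser


def agent_for_rules(robots_txt: str) -> str:
    """Pick which user-agent group Maibot should honor (hypothetical helper):
    Maibot's own group if present, otherwise Googlebot's, otherwise *."""
    lines = [line.split("#", 1)[0].strip().lower() for line in robots_txt.splitlines()]
    agents = [line.split(":", 1)[1].strip() for line in lines if line.startswith("user-agent:")]
    if "maibot" in agents:
        return "Maibot"
    if "googlebot" in agents:
        return "Googlebot"
    return "*"


def is_allowed(robots_txt: str, url: str) -> bool:
    """Evaluate the site's robots.txt rules for `url` under the fallback policy."""
    parser = urllib.robotparser.RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(agent_for_rules(robots_txt), url)
```

Note that without the fallback, a parser asked about an unnamed agent like Maibot would apply only the permissive default, so a site that carefully restricted Googlebot would be crawled without restriction; the fallback closes that gap.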

Crawl rate and server load

Maibot is designed to be a considerate guest. It respects the Crawl-delay directive when specified, and automatically throttles its request rate to avoid placing undue load on any server. If a server signals that it is busy or overloaded (for example, via HTTP 429 or 503 responses), Maibot backs off accordingly.
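The throttling behavior described above can be sketched as follows. This is our illustration of the policy, not Maibot's actual code; the function names and the exponential-doubling schedule are assumptions:

```python
import time
import urllib.error
import urllib.request


def backoff_delays(crawl_delay: float, retries: int) -> list[float]:
    """Seconds to wait after each successive 429/503 response:
    the base crawl delay, doubled on every retry (assumed schedule)."""
    return [crawl_delay * (2 ** i) for i in range(retries)]


def polite_fetch(url: str, crawl_delay: float = 1.0, max_retries: int = 5):
    """Fetch `url`, backing off whenever the server signals overload."""
    for wait in backoff_delays(crawl_delay, max_retries):
        try:
            with urllib.request.urlopen(url, timeout=30) as resp:
                return resp.read()
        except urllib.error.HTTPError as exc:
            if exc.code not in (429, 503):
                raise  # a real error, not an overload signal
            # Honor an explicit Retry-After header when the server sends one.
            retry_after = exc.headers.get("Retry-After", "")
            time.sleep(float(retry_after) if retry_after.isdigit() else wait)
    return None  # give up after max_retries overload responses
```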

Identification

Maibot always identifies itself honestly via its HTTP User-Agent header; website operators can recognize Maibot requests by looking for the string Maibot within that header. For the foreseeable future, Maibot will crawl from the IP address 44.210.157.198, so operators who wish to verify that a request genuinely originates from Maibot, rather than from an impersonator reusing our User-Agent string, may also check for this IP address.
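The verification an operator can perform amounts to a two-part check, sketched below. The function name is ours, and the full User-Agent string in the example is hypothetical; the page only guarantees that the string Maibot appears in it:

```python
# Published source address for genuine Maibot traffic.
MAIBOT_IP = "44.210.157.198"


def is_genuine_maibot(user_agent: str, remote_ip: str) -> bool:
    """A request is treated as genuine Maibot traffic only if its User-Agent
    contains "Maibot" AND it originates from the published IP address."""
    return "Maibot" in user_agent and remote_ip == MAIBOT_IP
```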

Crawling scope

Maibot only crawls publicly accessible pages. It does not submit forms, create accounts, or interact with web applications in ways that could generate unintended side effects.

Link discovery via external databases

In addition to following links discovered through direct crawling, Maibot may also discover URLs through external search and link databases — such as the Brave Search API, GDELT, or other services — and then visit those URLs to retrieve the linked pages. In all such cases, Maibot still fetches and respects the robots.txt of the destination website before downloading any content, exactly as it would for links discovered through direct crawling.
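The filtering step described above can be sketched as a pipeline: externally discovered URLs are grouped by host and checked against each host's robots.txt before anything is downloaded. This is our illustration, not Maibot internals; filter_allowed and the injected read_robots callback are hypothetical:

```python
import urllib.robotparser
from urllib.parse import urlsplit


def filter_allowed(urls, read_robots, agent="Maibot"):
    """Keep only URLs whose site's robots.txt permits `agent` to fetch them.

    `read_robots(host)` returns the robots.txt text for a host; it is
    injected so the sketch can run without network access.
    """
    parsers = {}  # one cached parser per host
    allowed = []
    for url in urls:
        host = urlsplit(url).netloc
        if host not in parsers:
            parser = urllib.robotparser.RobotFileParser()
            parser.parse(read_robots(host).splitlines())
            parsers[host] = parser
        if parsers[host].can_fetch(agent, url):
            allowed.append(url)
    return allowed
```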

Data use

We use crawled data only in support of our current or anticipated business goals, products, and services, and only to the extent allowed by law. We do not sell raw crawled content to third parties.

Opt out

Website operators who do not wish to have their content crawled by Maibot may exclude Maibot by adding the following directive to their robots.txt file:

User-agent: Maibot
Disallow: /

Maibot obeys the Robots Exclusion Protocol (RFC 9309); see that specification for more details on the robots.txt format.

Contact

If you have questions or concerns about Maibot or wish to report an issue, please contact us at maibot@masterai.ai.