A crawler is a program used by search engines to collect data from the internet. When a crawler visits a website, it reads the entire content (i.e. the text) and stores it in a database. It also stores all the internal and external links it finds on the website. The crawler visits these stored links at a later point in time, which is how it moves from one website to the next. Through this process, the crawler captures and indexes every website that is linked from at least one other website.
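The process above can be sketched in a few lines of Python. This is a simplified illustration, not a real crawler: instead of fetching live pages, it crawls a small hypothetical in-memory "web" (the `PAGES` dictionary is invented for this example), but the logic is the same: capture a page's text, store its links, and visit them later.

```python
from collections import deque

# Hypothetical in-memory "web": each URL maps to its text and the links on the page.
PAGES = {
    "https://example.com/":      ("Welcome to Example", ["https://example.com/about"]),
    "https://example.com/about": ("About us", ["https://example.com/", "https://example.com/blog"]),
    "https://example.com/blog":  ("Our blog", []),
}

def crawl(seed):
    """Breadth-first crawl: store each page's text, then visit its links later."""
    index = {}             # the crawler's database of captured page content
    queue = deque([seed])  # links stored to be visited at a later point
    while queue:
        url = queue.popleft()
        if url in index or url not in PAGES:
            continue           # skip pages already captured or not reachable
        text, links = PAGES[url]
        index[url] = text      # capture the page's content
        queue.extend(links)    # follow the links to move to the next pages
    return index

index = crawl("https://example.com/")
print(sorted(index))  # all three pages are reachable via links and get captured
```

Note that the blog page is only found because the about page links to it: a page with no inbound links would never enter the queue, which is why linking matters for being indexed.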
A search engine is a website through which users can search internet content. To do this, users enter the desired search term; the search engine then looks through its index and returns a list of matching websites.
A search term is what users key into a search engine when they want to find something specific. A search term can be a single keyword or a combination of words, e.g. “dentist” or “dentist Boston implant”.
An index is another name for the database used by a search engine. An index contains information on all the websites that Google (or any other search engine) was able to find. If a website is not in a search engine's index, users will not be able to find it.
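A minimal sketch of how such an index answers a query, assuming the crawler has already stored each page's text (the URLs and page texts here are invented for illustration). Search engines typically use an inverted index: a mapping from each word to the pages containing it.

```python
# Page texts a crawler might have collected (hypothetical examples).
pages = {
    "https://example.com/dentist": "dentist in Boston offering implant treatment",
    "https://example.com/bakery":  "fresh bread and cakes in Boston",
}

# Build the inverted index: map each word to the set of pages containing it.
index = {}
for url, text in pages.items():
    for word in text.lower().split():
        index.setdefault(word, set()).add(url)

def search(term):
    """Return the pages that contain every word of the search term."""
    word_matches = [index.get(word, set()) for word in term.lower().split()]
    return set.intersection(*word_matches) if word_matches else set()

print(search("dentist Boston implant"))  # only the dentist page contains all three words
print(search("florist"))                 # word not in the index, so no results
```

The second query returns nothing because no indexed page contains "florist" - exactly the situation of a website that is missing from the index: users cannot find it.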