For a web search engine, the retrieval of data is a combined activity of the crawler, the database and the search algorithm. These three elements work in concert to retrieve web pages that are related to the word or phrase the user enters into the search engine’s user interface.
Commercial search engines are a key access point to the Web and have the difficult task of trying to find the most useful of the billions of web pages for each user query.
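To make the interplay of those three elements concrete, the sketch below is a deliberately simplified illustration, not any real engine's internals: the crawler's output feeds an inverted index (the database), and the search algorithm answers a query by looking its terms up in that index. All page contents, URLs and function names here are illustrative assumptions.

```python
from collections import defaultdict

# Toy "crawler" output: pages the crawler might have fetched (illustrative data).
crawled_pages = {
    "example.com/search": "web search engines rank pages by relevance",
    "example.com/cats":   "pictures of cats",
}

# Toy "database": an inverted index mapping each term to the pages containing it.
index = defaultdict(set)
for url, text in crawled_pages.items():
    for term in text.lower().split():
        index[term].add(url)

def search(query):
    """Toy "search algorithm": return pages containing every term of the query."""
    terms = query.lower().split()
    if not terms:
        return []
    return sorted(set.intersection(*(index[t] for t in terms)))

print(search("search engines"))   # ['example.com/search']
```

Real engines add many layers (tokenisation, spelling correction, ranking, and so on), but the division of labour between crawling, indexing and query-time searching follows this general shape.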
The really tricky part is the ranking of results. Ranking is also what the user will spend the most time and effort trying to affect. Google’s PageRank was an attempt to resolve this dilemma, based upon two assumptions (a small sketch of the idea follows the list):
* More useful pages will have more links to them
* Links from well-linked pages are better indicators of quality
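The following is a minimal sketch of the PageRank idea under those two assumptions, not Google's production algorithm: each page repeatedly shares its score with the pages it links to, so pages with many in-links, and in-links from well-linked pages, end up with higher scores. The toy graph, damping factor and iteration count are illustrative assumptions.

```python
def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {page: 1.0 / n for page in pages}   # start with a uniform score

    for _ in range(iterations):
        # Every page keeps a small baseline score...
        new_rank = {page: (1.0 - damping) / n for page in pages}
        # ...and passes the rest of its current score to the pages it links to.
        for page, targets in links.items():
            if not targets:
                continue
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

# Toy web of four pages: C is linked to by everyone else, so it ranks highest;
# D has no in-links at all, so it ranks lowest.
toy_web = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
print(sorted(pagerank(toy_web).items(), key=lambda kv: -kv[1]))
```

The damping factor models a surfer who occasionally jumps to a random page instead of following links; without it, scores can get trapped in closed loops of the link graph.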
Many queries on real search engines have hundreds, thousands or even millions of hits, and users of search engines generally prefer to look through only a handful of results, perhaps five or ten at most.
Therefore, a search engine must be capable of picking the best few from a very large number of hits.
A good search engine will not only pick out the best few hits but also display them in the most useful order. The task of picking out the best few hits in the right order is called ‘ranking’.
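As an illustration of that "pick the best few and order them" step, the sketch below uses a standard top-k selection (Python's heapq.nlargest) over hypothetical relevance scores; how those scores are computed is exactly the ranking problem the text describes and is not specified here.

```python
import heapq

def top_hits(hits, scores, k=10):
    """Return the k highest-scoring hits, best first."""
    return heapq.nlargest(k, hits, key=lambda h: scores[h])

# Hypothetical scores: a real query may match millions of pages,
# but only the handful with the highest scores are shown to the user.
scores = {"page_a": 0.91, "page_b": 0.42, "page_c": 0.77}
print(top_hits(list(scores), scores, k=2))   # ['page_a', 'page_c']
```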
Search engine algorithm