The Tim Berners-Lee model for the web, built for academic rather than commercial utility, has always relied on a search engine based on a simple text box that queries a database of data scraped from internet pages by web crawlers and spiders. This model has persisted for 25 years. Google won the early search engine wars because it was perceived to have the best algorithm. Yet even its competitors use the same model; they simply try to use a better algorithm, and some, such as DuckDuckGo, avoid search cookies. But the internet has changed hugely over those 25 years, and the search challenge needs rethinking based on what is happening on the internet now, using first principles, just as Tesla has done with the auto industry.
ChatGPT is already taking the internet by storm, and it is easy to see how it represents a threat to Google. A New York Times article said this is a “code red” moment for Google (Google’s ad business represents 80% of its revenue). Forbes magazine ran an interesting article titled “If ChatGPT Can Disrupt Google In 2023, What About Your Company”. Many big corporates depend on paid ads to drive business, so this will not affect Google alone. It shows that, as unlikely as it may seem, Google’s business model can be disrupted despite its global dominance.
One reason Google has difficulty adapting is the huge ad revenue it receives from the existing Ad Tech model. It is so profitable that it has become Google’s addiction: shareholders always want more, even though it is not sustainable. It is increasingly clear that the Ad Tech model used by both Google and Facebook is dysfunctional, and it is getting worse. Agency Spotter has an article identifying the six biggest problems with Ad Tech. Fraud alone accounted for $81bn in 2022 and is forecast to reach $100bn by 2023. As Google’s shareholders demand more revenue, its search performance drops. Content providers optimize their pages with keyword stuffing just to get eyeballs and clicks, not to answer the questions you want answered.
Our first patent, US Patent No. 10977387, a utility patent for searching for goods and services, is completely different from its sister patent, US Patent Application No. 17/980298, which covers searching for information. This is because the most efficient way to search for each is profoundly different. Our mechanism for searching for goods and services will always be much more accurate because our data will always be much better than Google’s: our goods and services data capture is decentralized, provided in real time by the suppliers themselves on the platform. Google’s data is largely gathered by scraping websites with web crawlers and spiders. This is a batch process that is nearly always at least a day out of date, which makes a critical difference in accuracy, especially around product availability. Our information search mechanism has a search ranking influenced by publisher and author quality. This quality metric is calculated from content consumption and from whether the publisher or author has been regularly blocked. Because we only ever operate in apps, not browsers, this is difficult for bots to game. Google’s search ranking is dominated by SEO and by how much companies pay for keywords.
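To make the ranking idea concrete, here is a minimal illustrative sketch in Python of a quality metric that rewards content consumption and penalizes regular blocking. The signal names, the weighting, and the exact formula are assumptions for illustration only; they are not taken from the patent text.

```python
from dataclasses import dataclass

@dataclass
class PublisherStats:
    """Hypothetical per-publisher signals gathered in-app."""
    avg_read_completion: float  # fraction of each article actually read, 0..1
    block_rate: float           # fraction of readers who blocked this publisher, 0..1

def quality_score(stats: PublisherStats, block_penalty: float = 2.0) -> float:
    """Illustrative metric: reward consumption, penalize blocking.

    block_penalty is an assumed weighting, clamped so the score stays in 0..1.
    """
    raw = stats.avg_read_completion - block_penalty * stats.block_rate
    return max(0.0, min(1.0, raw))

def rank(results):
    """Order (name, stats) pairs by quality score, highest first."""
    return sorted(results, key=lambda r: quality_score(r[1]), reverse=True)

# A frequently blocked publisher ranks below a well-read, rarely blocked one,
# even if its raw consumption is slightly higher.
good = PublisherStats(avg_read_completion=0.8, block_rate=0.05)
bad = PublisherStats(avg_read_completion=0.9, block_rate=0.40)
ranked = rank([("often-blocked", bad), ("well-read", good)])
```

Because the signals come from in-app behaviour of identified users rather than open web traffic, inflating `avg_read_completion` or suppressing `block_rate` at scale is much harder for bots than gaming keyword-based SEO.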
When searching on a mobile phone, many people will, whenever possible, use single vertical-market apps rather than the Google app, which is still just the 25-year-old text search. For example, when searching for fast food, accommodation, taxis, and flights, most people use dedicated single vertical-market apps such as Just Eat, Hotels.com, Uber, and Skyscanner. People don’t like using the mobile keyboard or typing text into a text box; they prefer easy-to-use mobile controls operated with just their thumbs.