ChatGPT, first impressions and applications

Published by Jorge on


I haven’t explored all AI-related developments since ChatGPT’s launch, but here are some notes to revisit in a couple of years.

ChatGPT just chains probable words, which might make you think that it will only regurgitate what it ingested, or that nothing interesting can come of it. It could seem like nothing more than a natural language search engine.

However, we’re discovering that GPTs can create lots of stuff by just chaining probable words, and can accomplish tasks you wouldn’t expect. Counterintuitively, simple rules can produce enormous complexity, and this could be a key to the next information revolution. To me, chat models look like a new computational primitive.

Using a Chat model

First of all, don’t be fooled by its natural language interface. It’s a model, and you have to treat it like one. The proper use of chat models is a new field called Prompt Engineering: the set of techniques for obtaining the desired results from the model.

I remember when Google launched: people couldn’t find information as easily as they do today, because before SEO appeared you had to know how to search for things (you still do, but it’s easier now). Likewise, using chat models effectively requires a specific mindset.

Simplifying, proper use of a chat model requires clear instructions, context, and specific constraints. Sometimes you will need at least two attempts to get a proper result, and answers should always be double-checked, just like anything you find on the internet. In general, these models work better easing the work of an expert than in the hands of someone who lacks any knowledge of the field being queried.
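To make this concrete, here is a minimal sketch of structuring a prompt around those three ingredients. The template, field names, and example content are all illustrative assumptions, not a standard format:

```python
# Assemble a prompt from the three ingredients discussed above:
# clear instructions, context, and specific constraints.
def build_prompt(instruction, context, constraints):
    parts = [
        f"Instruction: {instruction}",
        f"Context: {context}",
        "Constraints:",
    ]
    parts += [f"- {c}" for c in constraints]
    return "\n".join(parts)

prompt = build_prompt(
    instruction="Summarize the incident report for management.",
    context="The report describes a 2-hour database outage on 2023-04-12.",
    constraints=["at most 3 sentences", "no technical jargon"],
)
print(prompt)
```

The point is less the template itself than the habit: spelling out what you want, what the model should know, and what it must not do tends to beat a one-line question.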

A step after the chat model

Things are moving so fast that we already have new applications built by enriching the chat model primitive with new features. Some of the first applications I have seen are:

  1. Next-gen search engines like Bing Chat, which elevate the search experience by browsing through links. Since models are trained on static data, letting the model browse gives it new information and updated context to answer questions.
  2. ChatGPT plugins that transform the classic command line interface and APIs into a natural language interface, much like how the iPhone simplified complex interactions.
  3. Code completion models integrated into development environments, like GitHub Copilot (and recently Amazon CodeWhisperer), provide new ways of generating code that increase development efficiency.
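The "browse then answer" pattern in the first item can be sketched in a few lines: fetched snippets are injected into the prompt so the model answers from fresh data rather than its frozen training set. Here `fetch_snippets` is a stub standing in for a real search-and-scrape step:

```python
def fetch_snippets(query):
    # Stub: a real implementation would call a search engine and
    # extract text from the result pages.
    return ["Result A about " + query, "Result B about " + query]

def browsing_prompt(question):
    # Number the snippets and place them ahead of the question so the
    # model can ground its answer in the retrieved sources.
    snippets = fetch_snippets(question)
    context = "\n".join(f"[{i + 1}] {s}" for i, s in enumerate(snippets))
    return (
        "Answer using only the sources below.\n"
        f"Sources:\n{context}\n"
        f"Question: {question}"
    )

print(browsing_prompt("the latest LLaMA release"))
```

Everything downstream of the stub is just string assembly; the enrichment comes from where the text originates, not from changing the model itself.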

Architecting new applications

More complex applications involve some tooling, such as that provided by the LangChain framework. Check it out if you want to build anything complex with AI.

The most important technique I have seen is using embeddings and vector databases to give models memory. Embeddings map text into a compressed vector space that can be stored and indexed efficiently, and vector databases exist to query those embeddings. Mapping text to a vector space makes it possible to compute the similarity between texts, which is useful for search.
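Here is a toy illustration of that idea. Real systems use a learned embedding model and a proper vector database; this sketch substitutes a trivial word-count "embedding" and a linear scan, which is enough to show the mechanics of ranking documents by cosine similarity:

```python
import math
from collections import Counter

def embed(text):
    # Toy embedding: a bag-of-words count vector. A real system would
    # call a learned embedding model instead.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "the cat sat on the mat",
    "stock prices fell sharply",
    "a cat chased a mouse",
]
query = embed("cat sat on the mat")
best = max(docs, key=lambda d: cosine(query, embed(d)))
print(best)
```

A vector database does exactly this ranking, but over millions of precomputed embeddings with indexes that avoid the brute-force scan.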

Some relevant examples I saw are:

  1. Indexing video speech as queryable data, allowing for the creation of smart, domain-specific search engines. An example is the index of all Lex Fridman podcast episodes.
  2. GPT agents: autonomous software that runs recursive queries to achieve better accuracy and precision in tasks such as market research, text analysis, and product comparison. Examples include AutoGPT and BabyAGI.
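The loop behind agents like AutoGPT can be sketched very simply: the model proposes the next action, the loop executes it, and the result is fed back until the model declares the goal met. Both `model` and `run_step` below are stubs standing in for a real LLM call and real tools (browsing, file I/O, APIs):

```python
def model(history):
    # Stub planner: a real agent would send the history to an LLM and
    # parse the proposed next action from its reply.
    if "prices collected" not in history:
        return "collect prices"
    if "summary written" not in history:
        return "write summary"
    return "DONE"

def run_step(action):
    # Stub executor standing in for the agent's tools.
    return {"collect prices": "prices collected",
            "write summary": "summary written"}[action]

def agent(goal, max_steps=10):
    history = goal
    for _ in range(max_steps):  # cap iterations so the loop always ends
        action = model(history)
        if action == "DONE":
            return history
        history += " | " + run_step(action)
    return history

print(agent("compare product prices"))
```

The "recursive" quality comes from feeding each result back into the next query; the step cap is the usual safeguard against an agent that never converges.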

Lastly, the most impressive application might be the simulation of a smart video game town with human-like NPCs: a more complex architecture built on ChatGPT gives the characters context, memory, and a perception of time, producing coherent, human-like behavior from non-player characters.

Local AI execution

But there’s more. Besides growing more complex applications, there is another axis of AI development: local execution. Running AI locally frees the user from depending on external services and third parties.

There are projects trying to run ChatGPT competitors locally. A big obstacle to AI adoption is privacy: companies cannot afford to hand confidential data to OpenAI’s models. As soon as local models are competitive enough, I expect every company to run its own AI stack to improve organizational efficiency. Open source models like LLaMA will provide the foundation for these stacks; see turbopilot for a code completion example.

Along the same axis, besides local corporate AIs, I expect smaller personal AIs to develop. Users’ need for privacy, reduced costs, and improved latency will drive smaller personal AIs based on “compressed” models with fewer parameters. We already have examples running locally with only 4GiB of memory. I expect that in the near future everyone will have an AI, just as we all have a smartphone now.
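A back-of-the-envelope calculation shows why compression matters here. A common technique is quantization, storing each weight in fewer bits, and the memory needed is roughly parameters × bits per weight / 8 bytes (ignoring activations and overhead):

```python
def model_memory_gib(params, bits):
    # Rough weight-storage estimate: params * bits / 8 bytes, in GiB.
    return params * bits / 8 / 2**30

# A 7-billion-parameter model at decreasing precision.
for bits in (16, 8, 4):
    print(f"7B params at {bits}-bit: {model_memory_gib(7e9, bits):.1f} GiB")
```

At 16-bit precision a 7B model needs about 13 GiB just for weights, but quantized to 4-bit it drops near 3.3 GiB, which is how such models squeeze into a 4 GiB budget on consumer hardware.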

Closing words

The impact this technology will have on all information-related work will be a huge challenge for society. Some are already concerned that we might be getting closer to artificial general intelligence and its dangers.


If you want to read more like this, subscribe to my newsletter or follow me on Twitter