Search's Future & Sequoia CEO Meet - Part 2

Find out how small businesses can leverage this emerging technology and the steps to set up an on-premise Large Language Model (LLM) with in-context learning.

👋 Hello, my name is Andrew Amann and I am a founder and CEO of NineTwoThree, a product engineering and design studio where we build web, mobile, and AI apps. Our team builds ML / AI models for enterprise companies and, through osmosis and conversation, understands the applications that are going to make a larger difference than ChatGPT.

LLMs are here, and while all of you on Twitter are watching people build tools on top of ChatGPT and bragging about quick 💰, be glad that you are here to learn about what the big boys and girls are doing with AI.

While we all gloat about the simplicity of logging in to our favorite LLM and asking questions to our ♥️'s content, not everyone is excited about the ease of use.

Amazon has restricted all of its employees from using Bard, ChatGPT, or any other LLM. Why? Security. Is this a mistake? No. It's going to become the norm.

Because what many do not realize is that when the www was created, the same security problems existed. Granted, it was simpler back then - there were very few computers 💾 connected to the internet - but firewalls were built very quickly. And the enterprise companies were protected 🔒.

LLMs pose the same risk. Instead of restricting which websites you can visit inside the company firewall, Amazon is restricting which LLMs you can use. Because anything you type into a public LLM can be stored, used for training, and surface elsewhere - treat it as public, forever.

Even law offices are restricting the use of LLMs, because if a lawyer types confidential information into one to formulate an agreement, that is a violation of attorney-client confidentiality. 🤝

So don't go thinking lawyers will be extinct soon - the only thing that will happen is that their prices will decrease as software agencies build out private LLMs for their offices. (More on this soon.)

But what about all the tools on Twitter that are "making life easier" for lawyers, accountants, tax advisors, content writers, filmmakers, and every person's job ever created? Well, those will all become use cases for agencies and internal enterprise teams of how to build secure tools for real-world use cases.

I am not saying that all tools are junk. But the Lindy effect suggests that if you build something in a month, it will die in a month. Things need to be built to last for customers that plan to use your product for years - not microseconds.

So if “new tools” will die, how will AI advance? 🪦 Bard is now released publicly to the world and destroys any tool that was built on ChatGPT promoting a “fine-tuned” dataset. I’m looking at those Jasper-style “make me something in my brand voice” tools that have you upload all your website content and then hit “Give me a hero image.” Double 🥱🥱

But something is happening that no one saw coming. It is larger than ChatGPT; it is not human-created, and it is shocking data scientists. It also, in theory, should replace most tools you see being built on top of ChatGPT because, well, it's emergent.

It’s called “In Context Learning” and Enterprise clients are drooling over its capabilities. 

What Is In-Context Learning❓

In-context learning is an emergent ability of large language models: the model learns to perform a task from a handful of examples provided directly in its prompt - its context - at inference time, with no retraining and no weight updates. This is in contrast to traditional machine learning, where a model is trained on a large dataset of labeled examples before it can perform the task.

Kind of like asking an intern to label all the movies in Netflix 🍿by genre. 🥱

Previously, you would need to hire a team of laborers to go into Netflix and label movie after movie after movie after movie of alllll the genres that exist. (What even is Neo-Noir, anyway?) Then, after everything is properly labeled, a data scientist can finally come in and build predictions on the dataset, so that after you're done watching “10 Things I Hate About You” you can roll right into “Never Been Kissed.” Yes, 90s movies are the best.

With in-context learning, you instead show the model a few examples of how humans labeled those movies based on the context of the movie itself. Maybe the machine notices that if the word “Love” appears more than 4 times, it's a romance. But it is not doing this in post-production. The machine infers the pattern from only the few labels the human provided, then predicts the rest of the labels from other things it discovers about the movies - sometimes even things humans don’t notice.

Maybe there are millions of movies, and after 20 labels the machine can start labeling the genres itself. That is a ridiculous time saving when preparing data for prediction models.

This. Is. Mind-blowing. 🤯
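Here is a minimal sketch of what that few-shot movie-labeling prompt might look like in code. The helper function, example titles, and label format are all illustrative - the resulting string would be sent to whichever LLM you use:

```python
# Build a few-shot genre-labeling prompt: the model sees a handful of
# human-provided labels "in context" and infers the pattern for new titles.

def build_labeling_prompt(labeled, unlabeled):
    """labeled: list of (title, genre) pairs; unlabeled: list of titles."""
    lines = ["Label each movie with its genre."]
    for title, genre in labeled:
        lines.append(f"Movie: {title} -> Genre: {genre}")
    for title in unlabeled:
        lines.append(f"Movie: {title} -> Genre:")
    return "\n".join(lines)

examples = [
    ("10 Things I Hate About You", "Romance"),
    ("The Matrix", "Sci-Fi"),
]
prompt = build_labeling_prompt(examples, ["Never Been Kissed"])
print(prompt)
```

With only those two labeled examples in the prompt, a capable LLM will continue the pattern and fill in the missing genre - no training step involved.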

That Is Great But How Can I Use In Context Learning For My Small Biz?

If you run a small business, SaaS product, or even a brick-and-mortar store 🧱, there are still ways you should be educating yourself to prepare your biz. Because if you don’t, your competitors will…

Improve customer service:

In-context learning can be used to train AI models to answer customer questions in a more accurate and efficient way. This can free up human customer service representatives to focus on more complex tasks.

Personalize marketing campaigns: 

In-context learning can be used to track customer behavior and preferences to personalize marketing campaigns. This can help businesses to target their marketing efforts more effectively and improve their return on investment (ROI).

Automate tasks:

In-context learning can be used to automate tasks that are currently performed by humans. This can free up human employees to focus on more strategic and value-added activities.

Make better decisions: 

In-context learning can be used to analyze data to identify trends and patterns. This information can be used to make better decisions about business operations.

Here are specific examples of how a biz can use it…

  1. A small coffee shop could use in-context learning to train an AI model to recommend coffee drinks to customers based on their past orders and preferences.
  2. A small clothing store could use in-context learning to track customer behavior in the store to personalize marketing campaigns and product recommendations.
  3. An accounting firm could use in-context learning to automate tasks such as data entry and billing, freeing up human employees to focus on more complex tasks.
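As a rough sketch of the coffee shop example, in-context learning here just means putting a customer's order history into the prompt. Everything below - the drink names and the prompt wording - is made up for illustration:

```python
from collections import Counter

def build_recommendation_prompt(order_history):
    """order_history: list of a customer's past drink orders."""
    # Summarize the customer's context: their three most-ordered drinks.
    favorites = [drink for drink, _ in Counter(order_history).most_common(3)]
    lines = [
        "Recommend one new drink for this customer.",
        f"Their most-ordered drinks: {', '.join(favorites)}.",
        "Recommendation:",
    ]
    return "\n".join(lines)

history = ["oat latte", "oat latte", "cold brew", "oat latte", "mocha"]
prompt = build_recommendation_prompt(history)
print(prompt)
```

The LLM never sees your whole sales database - only the few lines of context you choose to put in front of it, which is also what keeps this pattern firewall-friendly.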

Here are the steps on how to set up an on-premise LLM with in-context learning:

  1. Choose an LLM platform. There are a number of LLM platforms available, both commercial and open source. Popular options include Hugging Face's Transformers (open-source models you can run on your own hardware) and hosted APIs such as OpenAI's GPT models or Google's - though only the open-source route is truly on-premise.
  2. Install the LLM platform. Once you have chosen an LLM platform, you will need to install it on your on-premise server. The installation process will vary depending on the platform you choose.
  3. Configure the LLM platform. Once you have installed the LLM platform, you will need to configure it. This includes setting up the environment variables, such as the path to the data directory and the model parameters.
  4. Collect data. In order to train the LLM, you will need to collect data. This data can be in the form of text, images, or audio. The type of data you collect will depend on the task you want the LLM to perform.
  5. Label the data. Once you have collected the data, you will need to label it. This means assigning labels to each piece of data. The labels will help the LLM to learn the task you want it to perform.
  6. Train or adapt the LLM. With in-context learning you may not need full training at all - a handful of labeled examples in the prompt can be enough. If you do fine-tune on your labeled data, the process can take several hours or even days, depending on the size of the dataset and the complexity of the task.
  7. Deploy the LLM. Once the LLM is trained, you can deploy it. This means making it available to users. You can deploy the LLM on a web server or a mobile app.
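Steps 1–3 might look something like this on a Linux server, assuming the open-source Hugging Face Transformers route. The paths are illustrative, and `gpt2` stands in for whatever open model you actually choose:

```shell
# Step 1-2: create an isolated environment and install the Transformers stack
python -m venv llm-env && source llm-env/bin/activate
pip install transformers torch

# Step 3: point the model/weights cache at your on-premise data directory
export HF_HOME=/srv/llm/cache

# Smoke-test: load a small open model locally and generate a few tokens
python -c "from transformers import pipeline; \
gen = pipeline('text-generation', model='gpt2'); \
print(gen('Hello, world', max_new_tokens=10)[0]['generated_text'])"
```

Nothing in this flow sends your data outside the firewall - the model weights are downloaded once and cached locally, which is the whole point of going on-premise.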

This Is When You Pull Out Your Reading Glasses 🧐and Read The Fine Print. 

LLMs are going to go private. Secure models will be built inside of network firewalls and provide executive-assistant-level help to all employees inside a company.

Workflows will be created, classifications will exist and prompting will be rampant. But all inside a company firewall. And all built internally for that specific company. 

You see, it's important - ahem, vital - that none of this information leaks. How a company “prompts” will be IP. How a company obtains data for its LLM will be IP. How a company filters data for their clients and customers will be IP.

So Lawyers will definitely be needed to protect this new paradigm. Told you we would come back to talking about Lawyers. 

But what do you need to know about your multi-thousandaire company (or millionaire, for those lucky few)?

You need to know that big products are being built to “suck up” all the ChatGPT tools that you see in the wild today. These big products are coming from companies such as Facebook, HuggingFace, Microsoft, and Google and they will be deployed inside of their ecosystem. 

For example, all of your “MidJourney” apps will be replaced by Adobe as it trains its own LLM to produce Photoshop-style image editing. Their LLM will self-learn from all of its users what types of edits images require, and apply those for future customers asking for the same edits.

Google will deploy search capabilities into Google Docs so that you can quickly create workflows for HR, Legal, Engineering etc - and categorize documents accordingly. Since no enterprise uses Google Docs - Dropbox will start deploying their own version of an LLM to their enterprise customers that will ensure security and scale. This will allow people to work INSIDE of a safe environment while they prompt and pitter patter on their keyboards. 

As for Amazon? Well - they already have many many many internal LLMs doing specific tasks. They have for years - it’s nothing new for them to figure out how to keep data secure. What is new is that their LLMs can get smarter through in-context learning based on the new discoveries in AI.

So keep playing with your SelfGPT apps and photoGPT tools. I am not sure about the future of Copy.AI - it’s hard to see a use case for it with Bard now. But then again, people still use Whereby even though Zoom, Google, Teams, and whatever Meta calls their video conferencing tool exists. 

Surely some will last. But most will die as enterprise gets warmed up with its newfound ability to perform in-context learning on large datasets.

As for agencies? Start reading Cohere blogs and learn how Pinecone works - because that is the future of development, imo.
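To get a feel for what a vector database like Pinecone does under the hood, here is a toy sketch in pure Python. Real systems use learned embeddings and approximate nearest-neighbor indexes at scale; the document names and 3-dimensional vectors below are made up:

```python
import math

# Toy vector search: store (id, vector) pairs and return the closest
# match by cosine similarity - the core operation behind Pinecone-style
# retrieval that feeds relevant context into an LLM prompt.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

index = {
    "refund-policy": [0.9, 0.1, 0.0],
    "store-hours":   [0.1, 0.8, 0.2],
    "menu":          [0.0, 0.2, 0.9],
}

def query(vec, k=1):
    ranked = sorted(index, key=lambda doc: cosine(vec, index[doc]), reverse=True)
    return ranked[:k]

print(query([0.85, 0.15, 0.05]))  # the refund-policy vector is nearest
```

In production, the query vector would come from embedding a user's question, and the top matches would be pasted into the LLM prompt - retrieval plus in-context learning, all behind your own firewall.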

Subscribe to the Caveminds AI newsletter here.
Andrew Amann
Building a NineTwoThree Studio