A race to release the first studio for building LLMs is in full swing
Anyone who has moved beyond the ChatGPT hype and dug into the technology and architecture of language models knows that the way software is produced is already changing. Still largely confined to the open-source community (perhaps inherently, given the nature of the technology), initiatives like LangChain, together with a handful of dedicated libraries for training, storing, managing, testing, and deploying language models, are growing rapidly and driving no-code tools. As I discussed in the post on Embeddings, LLMs act as agents in their own retraining (they ingest unstructured data through unsupervised training), and their data is organized semantically. Tools like Pinecone and Chroma are emerging as options for vector storage, essentially databases for language models, or rather, long-term retraining memories for AI. Combine them with cloud-based no-code services like Render or Replit, and with on-demand LLMs and embeddings (a market still largely dominated by OpenAI), and you get the first visually driven product that has genuinely impressed me: FlowiseAI. No-code and low-code were already hot topics; this business is about to heat up even more.
Illustration showing how Chroma acts as a vector (semantic) document storage for LLMs retraining.
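To make the "semantic storage" idea concrete, here is a minimal, purely illustrative sketch in plain Python: a toy in-memory vector store with bag-of-words "embeddings" and cosine similarity. It is not Chroma's or Pinecone's actual API (real stores use learned embedding models and approximate nearest-neighbor indexes); it only shows why storing vectors lets you retrieve documents by meaning overlap rather than exact keywords.

```python
# Illustrative sketch only: toy "embeddings" plus a minimal in-memory
# vector store, standing in for tools like Pinecone and Chroma.
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words counts (a real store would call
    # an embedding model, e.g. OpenAI's embeddings endpoint).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class VectorStore:
    def __init__(self):
        self.docs = []  # list of (text, vector) pairs

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def query(self, question: str, k: int = 1):
        # Rank stored documents by similarity to the question vector.
        qv = embed(question)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = VectorStore()
store.add("Chroma stores document embeddings for retrieval")
store.add("Render and Replit are no-code cloud platforms")
print(store.query("which tool stores embeddings")[0])
```

The important design point is that nothing here depends on exact string matches: documents and questions live in the same vector space, so "closeness" is what gets retrieved. That is the long-term memory mechanism the illustration above refers to.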
Let me enumerate these concepts to keep them organized:
- OpenAI provides the language models and embeddings that serve as a foundation for training your model.
- LangChain provides libraries for using these LLMs and adding new knowledge, allowing you to compose chains.
- Pinecone and Chroma let you store that knowledge in vectorized databases.
- Flowise brings all of this together in a visual interface that you can easily deploy on a no-code cloud platform like Replit or Render (which had its Series B investment approved at the end of last month).

It's becoming easy to create your own AI agent.
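The "chain" in the list above can be sketched in a few lines of plain Python. This is not LangChain's actual API (which changes frequently), and `fake_llm` is a hypothetical stand-in for a real model call; the point is only the flow a chain encodes: retrieve context, build a prompt, call the model.

```python
# Minimal sketch of a retrieval-augmented QA chain, with a stubbed LLM.
# A real chain would query a vector store and call OpenAI's API instead.
def retrieve(question: str, documents: list[str]) -> str:
    # Naive retrieval stand-in: pick the document sharing the most
    # words with the question.
    q = set(question.lower().split())
    return max(documents, key=lambda d: len(q & set(d.lower().split())))

def fake_llm(prompt: str) -> str:
    # Hypothetical stand-in for a language model call.
    return "Answer based on: " + prompt.splitlines()[0]

def qa_chain(question: str, documents: list[str]) -> str:
    # The chain: retrieve context -> fill prompt template -> call model.
    context = retrieve(question, documents)
    prompt = f"Context: {context}\nQuestion: {question}\nAnswer:"
    return fake_llm(prompt)

docs = [
    "Flowise exposes LangChain components in a visual editor",
    "Pinecone is a managed vector database",
]
print(qa_chain("what is pinecone", docs))
```

Each box you drag in Flowise corresponds roughly to one of these steps: a retriever node, a prompt-template node, an LLM node, wired together into a chain.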
FlowiseAI's visual interface allows you to build and assemble your agent using modular components, and you can test it right within the same window. You can connect external databases, REST services, and other resources. Finally, it is easy to export the generated code for the conversational or executable agent (which performs tasks on your behalf).
FlowiseAI is exactly what I've been expecting for months: a visual interface that lets you build conversational agents without getting caught up in technical jargon. You can construct everything from modular components, like chatbots or agents trained on your own data, ready to be embedded into your application to perform tasks or interact with your data. Still, I believe it's important for product managers, architects, and developers to understand concepts such as vector databases, embeddings, prompt engineering, and so on. Without this knowledge, you won't be able to fully explore this new way of building software.
But back to Flowise: it is still in its early stages, since installing and running it locally requires technical knowledge. Deploying it on any cloud takes library installations and technical expertise, although they will be launching their own cloud soon. Nonetheless, the product is impressively visual, and two weeks ago it received an investment round from Y Combinator, a well-known name in Silicon Valley. It has the potential to become the go-to platform for building AIs/LLMs in the short term, given the ongoing competition between Azure and Google Cloud. In my tests, I was pleasantly surprised; I didn't expect such a rapid revolution in productizing all the emerging open-source technologies. Oh, and the people behind the project are not only on their way to getting rich but also quite friendly: they even responded to me on Twitter about a feature I was eagerly anticipating.
All this revolution gives me some insights and predictions:
1. Corporations will still take a few months to actually apply LLMs to problem-solving, since they are not very open to open-source culture. So far, Microsoft has not shown any easily applicable product for building LLMs. Governments will take the lead, with India, Japan, and South Korea emerging as frontrunners. Brazil also has great potential.
2. Chatbot companies that continue to operate as integrators or provide OpenAI services within their traditional structures will lose ground in the medium term.
3. It will become increasingly easy to produce conversational software (without UI) for activities such as analysis, comprehension, and data manipulation by the end of this semester. Hello, RPA consulting industry, take note! Knowledge workers, it's time to expand your range of capabilities.
4. Cloud services or low-code platforms will be the modus operandi for small and medium-sized enterprises. Corporations will still rely on some traditional computing in the long run, as they try to minimize risks.
Well, these are predictions. It's a topic I've been focusing on in recent months, and I believe that transformation comes through technology rather than cultural change. Sometimes, it is technology that drives us to change, just look at the impact that social media has had on our society. With Generative AI, the biggest impact will be on businesses. And it will be beneficial – more productivity, less repetitive work, more creativity, more profit. Let's go for it! 👾