Brad Porter

An Exploration Into Autonomous AI



Artificial intelligence has become a transformative force across numerous industries within the rapidly changing technology landscape. Its potential to automate tasks, enhance decision-making processes, and improve overall efficiency has captured the attention of organizations worldwide. Eager to harness the power of AI, companies often find themselves navigating uncharted territory, experimenting with new tools and technologies to stay ahead in this digital age.

One such endeavor led us to the world of autonomous task processing through LLMs. Two widely known examples are AutoGPT and LangChain. AutoGPT accepts user input, breaks the request down into small tasks, and then processes them one by one, with the ultimate goal of completing the request as thoroughly as possible. LangChain, on the other hand, is geared toward developers: it enables a similar flow but requires you to write considerably more code. The downside is that it introduces a layer of abstraction that can obscure what's happening behind the scenes. Given the amount of code involved, it may be more effective to simply write the logic from the ground up, gaining full control and a clear understanding of what's going on when your solution fails.
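
The AutoGPT-style loop described above can be sketched in a few lines of plain Python. This is a hypothetical illustration, not AutoGPT's actual code: `decompose` and `execute` are stand-ins for the LLM calls a real system would make.

```python
# Sketch of an autonomous task loop: decompose a request into small
# tasks, then process each one in-line toward the overall goal.

def decompose(request: str) -> list[str]:
    # A real system would ask an LLM to produce this plan; we fake it.
    return [f"research: {request}", f"draft: {request}", f"review: {request}"]

def execute(task: str) -> str:
    # Stand-in for handing a single task to the LLM.
    return f"done({task})"

def run(request: str) -> list[str]:
    results = []
    for task in decompose(request):   # break the request into tiny tasks
        results.append(execute(task)) # process each one in sequence
    return results
```

The essential idea is that the loop, not the user, drives the model from one sub-task to the next until the original request is satisfied.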

The goal of this post is to detail the strengths and weaknesses of the available options, as well as the pitfalls you may encounter. The journey into autonomous AI tooling serves as a testament to the early stage of AI implementations and the importance of understanding their capabilities and limitations before fully integrating them into one's workflow.


LangChain

An alternative, and very different, experience is LangChain: a developer-centric solution designed to let users integrate chaining, agents, and LLMs directly into their code.

LangChain boasts an impressive array of tools and functionalities. A brief description of its toolset includes:

Chaining - The ability to combine multiple pieces of functionality into a single process, such as receiving user input, formatting it in a certain way, and then handing it to the LLM to be processed.

Persistent Memory - Saving the state of previous interactions in order to provide context for each new interaction with the LLM.

Agents - A layer between the user and the LLM that contextualizes user input to determine which of its available tools or external data sources would be useful for the request, selects and invokes those tools, and then parses the results in order to respond to the user or continue with its designated task.

Data Connections - External data stores that can provide more information than the model was trained on. These include text files, documents, and vector databases.
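
To make these concepts concrete, here is a plain-Python sketch of chaining, persistent memory, and a toy agent. This is deliberately not LangChain's actual API; every name here (`fake_llm`, `Memory`, `agent`, the `calc` tool) is a hypothetical stand-in for illustration.

```python
# Hand-rolled sketch of the chaining / memory / agent concepts above.

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call.
    return f"LLM response to: {prompt!r}"

def format_step(user_input: str) -> str:
    # Chaining: one step that reshapes input before the model sees it.
    return f"Answer concisely: {user_input}"

class Memory:
    # Persistent memory: prior turns supply context for new ones.
    def __init__(self):
        self.history: list[str] = []
    def context(self) -> str:
        return "\n".join(self.history)
    def save(self, turn: str) -> None:
        self.history.append(turn)

# The agent's available tools; eval is for demonstration only.
TOOLS = {"calc": lambda expr: str(eval(expr))}

def agent(user_input: str, memory: Memory) -> str:
    # Crude tool selection: route "calc ..." requests to the calculator,
    # everything else through the chain with remembered context.
    if user_input.startswith("calc "):
        return TOOLS["calc"](user_input[len("calc "):])
    prompt = memory.context() + "\n" + format_step(user_input)
    reply = fake_llm(prompt)
    memory.save(f"user: {user_input}\nassistant: {reply}")
    return reply
```

The point of the sketch is scale: each concept is a few lines on its own, and frameworks like LangChain wrap these same ideas in far more general, and far more abstract, machinery.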

While LangChain offers a large set of options and customizability, it comes with additional challenges. The abundance of resources and the complexity of the implementation can make it daunting for novices. Fully grasping its vast toolset, and how each component could provide benefit, requires a significant investment of time and effort.

The other note is that adopting it means locking into it as a dependency and a coding style, as that's the only way to truly benefit from its large array of functionality. This creates a black-box approach: there's a lot of magic happening behind the scenes, which is fine until something breaks, and then it takes more effort to determine what's wrong than it would have taken to write the code from scratch.

Build Your Own Implementation

Ultimately, after concluding our investigation of both AutoGPT and LangChain, we were armed with knowledge of how the bleeding-edge tooling works, and how that contrasted with how we expected it to. We determined there was value in building some of these concepts out in-house in order to maintain absolute control of every step, and we even ended up creating a standalone product, BrainConductor, using this customized logic.

One of the significant advantages of building our own implementation was the deep understanding we gained about the nuances of utilizing large language models effectively. Through trial and error, we discovered the optimal ways to structure prompts, handle context, and provide feedback to the models. This knowledge empowered us to achieve more accurate and tailored results. The process of building our implementation was surprisingly swift. In just a matter of weeks, we were able to put together a proof of concept that demonstrated the value and potential of our tooling.

The journey of building our implementation also provided us with valuable insights into the trade-offs between off-the-shelf tools and custom solutions. While off-the-shelf tools may offer convenience and initial speed, they often come with limitations and may not fully align with specific needs. By developing our own implementation, we were able to address these limitations head-on and create a solution that truly catered to our requirements.

Ultimately, the decision to build our own implementation was driven by the need for control, customization, and a solution that aligned precisely with our requirements. By taking this route, there was an invaluable and deeper understanding acquired. Looking ahead, we are considering the next steps, with one option being to integrate our implementation into the capabilities offered by LangChain.


In conclusion, all three options presented have their pros and cons.

AutoGPT is fantastic when there's a large amount of work to be accomplished and time is not a factor. As it is refined and becomes more reliable, it has enormous potential to take over a massive range of tasks.

LangChain is great for those who do not need to know, or have absolute control over, the internal workings of the tools it offers. It's backed by an army of open-source developers and will without a doubt continue to grow stronger over time.

Finally, while building functionality in house requires a team of knowledgeable developers, it provides an understanding of how the internals should function and allows full customization, which in some cases is a necessity rather than being locked into the standards set by other options.

Stay tuned for future posts, where we'll be open sourcing Brain Conductor, our AI personality platform, along with more of our experiments and learnings.