Experience: Lessons from the 2023 AI Craze

As we integrate more AI assistants and chatbots into our workflows in 2024, one of the most valuable lessons I’ve learned from working closely with conversational systems like ChatGPT is that we need to approach them as collaborative, generative processes rather than expecting perfectly formulated final outputs right away.

The key is to have an iterative, back-and-forth conversation to get the best results. Rather than treating these AI systems as oracles that will produce complex solutions unaided in one go, think of the interaction as a dialogue where we provide context, examples, and prompts to guide the conversation.

Most developers have probably already started using GitHub Copilot, Replit’s AI, or Cursor.so, along with web-based tools like windmill.dev, all of which help by suggesting and inserting syntax and grammar, freeing you up to focus on higher-level planning and architecture. By pasting in bug logs, snippets of code, and docs, I was able to quickly get useful code suggestions that would otherwise have taken a lot of manual searching through documentation.

This example highlights a couple of benefits of conversational guidance with AI: quickly solving the “needle in a haystack” problem of sifting through information, and using the interactions as a learning aid for new developers.

Generative, Iterative Process

ChatGPT and other natural language AI tools represent a fundamental shift in how we interact with technology – rather than inputting precise queries and commands, we now have more flexible, conversational interfaces. This allows for a generative, iterative approach where both sides participate to produce the end result. It reminds me of my days learning about fuzzy search and teaching basic Googling skills.

The key is to have a back-and-forth dialogue, providing context and prompts to guide the AI. My go-to for this when writing software is cursor.so. By copying bug logs and code snippets into the chat, Cursor’s chat (powered by gpt-4-1106-preview at the time of writing) suggests relevant syntax and grammar and even entire functions (which mostly work). This lets me focus cognitive resources on higher-level planning and program architecture rather than getting bogged down hunting for small syntax details in documentation.
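To make that loop concrete, here is a minimal sketch of the same “paste the error and the code into the chat” workflow against a chat API directly. It assumes the OpenAI Python client (openai >= 1.0); the bug log, snippet, and model name are just placeholders, and Cursor’s built-in chat handles all of this for you without writing any code.

```python
# Minimal sketch: iterate on a bug with a chat model by pasting in the
# error log and the offending snippet, then following up in the same thread.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

bug_log = "TypeError: unsupported operand type(s) for +: 'int' and 'str'"
snippet = "total = 0\nfor row in rows:\n    total += row['amount']"

messages = [
    {"role": "system", "content": "You are a concise pair programmer."},
    {"role": "user", "content": (
        f"I hit this error:\n{bug_log}\n\n"
        f"Here is the code:\n{snippet}\n\n"
        "What is the likely fix?"
    )},
]

reply = client.chat.completions.create(
    model="gpt-4-1106-preview",  # placeholder; use whatever model you have access to
    messages=messages,
)
print(reply.choices[0].message.content)

# Keep the conversation going rather than starting over: append the answer
# to the history and ask a follow-up with the context still in place.
messages.append({"role": "assistant", "content": reply.choices[0].message.content})
messages.append({"role": "user", "content": "Can you show the corrected loop with an explicit cast?"})
```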

The benefits of this conversational approach are twofold: first, it rapidly solves the classic “needle in a haystack” problem of locating some obscure piece of syntax in a vast documentation jungle. Second, it can aid learning by exposing developers to new patterns and approaches in Copilot’s code suggestions. Now that most tools are equipped with web search, you can simply follow up and ask the bot to respond with the appropriate structure or point you to more documentation to read.

The key is recognizing LLM chat interfaces like ChatGPT as conversational partners rather than one-way input-output machines. By embracing this iterative mindset, we unlock far more of AI’s potential.

Tools for Different Purposes

While ChatGPT has exploded in popularity recently, it’s important to recognize that there is a range of AI assistants available, each with different strengths. Through my work, I’ve gained early access to tools like Anthropic’s Claude that have expanded my perspective on AI’s capabilities.

For more open-ended creative tasks, I’ve found Claude to have very impressive natural language understanding and an intuitive conversational flow. It almost feels like chatting with a human teammate when brainstorming ideas or narrative content.

For generating synthetic media like images, video, and audio, Invoke.ai and Krea.ai come out on top for me. While services like DALL-E 2 and Midjourney pioneered AI art, I love the “infinite canvas” feeling I get in Krea/Invoke when designing and iterating.

My point is that while ChatGPT makes the headlines, prompt engineering and conversation design are critical to applying AI effectively in real-world applications. Focus on customizing prompts to take advantage of different tools’ strengths. Whether I’m drafting code in Cursor or writing a blog post with Claude ;), I design the conversation flow around what I’m trying to achieve.

The key is selecting the right tool for the job based on its capabilities, then guiding it with strategic prompts and examples. Rather than expecting any single AI assistant to excel at everything, prompt-engineer for each one.
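To illustrate what designing the conversation around the task can look like, here is a rough sketch of routing each task type to a tool with a prompt tuned to its strengths. The backend names, profiles, and prompt strings are hypothetical stand-ins rather than real APIs; the point is the structure, not the specific wording.

```python
# Sketch: one prompt profile per task type, so each tool is asked for what
# it is actually good at instead of a one-size-fits-all prompt.
TASK_PROFILES = {
    "code_fix":    {"backend": "cursor-chat", "system": "Be terse. Return a patch and a one-line rationale."},
    "brainstorm":  {"backend": "claude",      "system": "Think out loud. Offer three distinct directions before converging."},
    "image_brief": {"backend": "krea",        "system": "Describe composition, lighting, and style keywords only."},
}

def build_conversation(task_type: str, user_request: str) -> dict:
    """Return which backend to call and the opening messages for it."""
    profile = TASK_PROFILES[task_type]
    return {
        "backend": profile["backend"],
        "messages": [
            {"role": "system", "content": profile["system"]},
            {"role": "user", "content": user_request},
        ],
    }

# Usage: the same request shape, but the tool and framing change per task.
convo = build_conversation("brainstorm", "Outline a blog post on iterative prompting.")
print(convo["backend"])
print(convo["messages"][0]["content"])
```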

AI Cautions

As powerful as these assistants have become, it’s important to be aware of some key limitations, especially when trying to get them to produce high-level plans and architecture beyond their current capabilities. Could this change in 2024? AutoGen looks promising, but who knows.

I learned this lesson the hard way while working on a coding project. After successfully using ChatGPT to generate small code snippets, I asked it to design the overall program architecture for me. Even though I provided context and documentation, the result was fragmented and incomplete, and it ultimately required major refactoring.

The key takeaways here are:

Be strategic in how you apply AI tools – understand their strengths and limitations

For complex tasks, be prepared to provide more upfront guidance through examples, documentation, and the like (see the sketch after this list)

View it as a collaborative effort – don’t expect fully formed solutions unaided
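For the second takeaway, here is a hedged sketch of what “more upfront guidance” might look like for an architecture question: scope the ask down to one module and front-load the prompt with an example and a docs excerpt instead of asking cold. The send_to_llm() call and all the sample strings are hypothetical placeholders for whichever chat tool you actually use.

```python
# Sketch: build a scoped, example-laden prompt for a design question instead
# of asking the model to architect the whole program in one shot.
def scoped_architecture_prompt(goal: str, example: str, docs_excerpt: str) -> str:
    return (
        "You are helping design one module, not the whole system.\n\n"
        f"Goal: {goal}\n\n"
        f"Here is an existing module to match in style:\n{example}\n\n"
        f"Relevant documentation:\n{docs_excerpt}\n\n"
        "Propose the module's public functions and data flow. "
        "Ask questions if anything is underspecified."
    )

prompt = scoped_architecture_prompt(
    goal="Add a caching layer in front of the reports endpoint",
    example="# reports.py\ndef get_report(report_id): ...",
    docs_excerpt="Cache entries expire after 300 seconds by default.",
)
print(prompt)
# send_to_llm(prompt)  # hypothetical: hand this to your chat tool of choice
```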

I’m still very optimistic about AI’s potential, but I’ve learned through experience that we need to guide these models patiently rather than expecting fully autonomous reasoning beyond their training. The fruits come through collaboration.

Good luck out there!
