Posts from November 2024


Last Week’s Struggles and Lessons (Week Ending 11/10/24)

Current Project: Reading books about entrepreneurs and sharing what I learned from them

Mission: Create a library of wisdom from notable entrepreneurs that current entrepreneurs can leverage to increase their chances of success

What I struggled with:

  • No material struggles this week.  

What I learned:

  • Getting the “book library” MVP to provide quality results that add value is the priority. After that’s accomplished, I can start thinking about how to put it in the hands of other users. Trying to figure out the path to allowing others to use it publicly was premature. I need to get this thing working and adding value first; then I can figure out how to share it.
  • I’ve been reading up on retrieval-augmented generation (RAG) because the MVP isn’t working as intended. RAG has more limitations than I initially thought when you feed it a ton of information (e.g., multiple books). It struggles to make connections between related information, but that’s essential. If the MVP can’t do that, providing value-added responses will be hard.
  • There’s a good chance that the Google Cloud Platform throttling of my account is impacting the depth of results I get. This is frustrating because you don’t get a warning or confirmation of throttling.
  • AI is good at many things, but it isn’t yet good at making sense of large, unstructured text data sets like books. Creating a structure or taxonomy for this kind of data could unlock what AI can do with it.
  • Google makes it easier for nontechnical people to test and tune Gemini large language models (LLMs). The throttling has me thinking about adding LLMs from other companies into the testing.
  • The more I learn from this project, the more I respect the human brain and its ability to store and process information from books.
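
The taxonomy idea in the list above could work roughly like this: attach structured metadata to each chunk of book text so retrieval can filter by label before anything reaches the LLM. The book/chapter/theme scheme below is a hypothetical illustration, not something from the actual MVP:

```python
from dataclasses import dataclass

@dataclass
class Chunk:
    """A passage of book text tagged with taxonomy metadata."""
    book: str
    chapter: str
    themes: list[str]
    text: str

# Tiny illustrative corpus; real chunks would be extracted from the books.
library = [
    Chunk("Book A", "Ch. 2", ["fundraising"], "The founder raised a seed round early on."),
    Chunk("Book A", "Ch. 5", ["hiring"], "Early hires set the company culture."),
    Chunk("Book B", "Ch. 1", ["fundraising", "bootstrapping"], "They grew without outside capital."),
]

def retrieve_by_theme(theme: str, corpus: list[Chunk]) -> list[Chunk]:
    """Filter on the taxonomy label first, so the LLM only sees related passages."""
    return [c for c in corpus if theme in c.themes]

hits = retrieve_by_theme("fundraising", library)
```

Filtering on a structured label like this narrows what gets stuffed into the context window, which is one way a taxonomy could help the model make connections across books instead of drowning in raw text.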

Those are my struggles and learnings from the week!

Book Library MVP Learnings

This week, I worked on the “book library” MVP. My goal is for the MVP to mimic an entrepreneur who’s an avid reader with a photographic memory. I want to query across multiple books, and I want the MVP to make connections that uncover new insights. I also want it to recall any book’s details quickly.

I’m using retrieval-augmented generation (RAG) and Gemini LLMs in the MVP. Last week’s testing yielded responses that weren’t detailed enough and didn’t uncover new insights. I was happy to have something working (it felt like a big milestone), but it still needed work. This week I tried a different RAG setup—a separate agent and RAG index for each book instead of one agent and one RAG index spanning multiple books. I also tested different prompting and system instructions. The changes didn’t improve the responses, which was frustrating: still too high level and unable to make insightful connections.
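
The two setups described above differ only in how the index is partitioned. Here is a toy sketch of that difference, with word overlap standing in for the real embedding-based retrieval; the scoring function and sample passages are my simplifications, not the MVP’s actual implementation:

```python
def score(query: str, passage: str) -> int:
    """Crude relevance score: shared-word count, a stand-in for embedding similarity."""
    tokens = lambda s: set(s.lower().replace(".", "").split())
    return len(tokens(query) & tokens(passage))

# Hypothetical passages; in the real MVP each book is a full indexed text.
books = {
    "Book A": ["The founder bootstrapped for five years.", "Hiring was the hardest part."],
    "Book B": ["They raised venture capital early.", "Hiring slowed after the downturn."],
}

# Setup 1: one agent with a single index spanning every book.
combined_index = [(title, p) for title, passages in books.items() for p in passages]

def query_combined(question: str, top_k: int = 2) -> list[tuple[str, str]]:
    """Return the top-scoring passages regardless of which book they came from."""
    return sorted(combined_index, key=lambda tp: score(question, tp[1]), reverse=True)[:top_k]

# Setup 2: one agent and one index per book; take the best passage from each.
def query_per_book(question: str) -> dict[str, str]:
    """Return each book's single most relevant passage."""
    return {title: max(passages, key=lambda p: score(question, p))
            for title, passages in books.items()}
```

The per-book setup guarantees every book contributes a passage, while the combined index can let one book crowd out the others; either way, the final answer quality still depends on what the LLM does with the retrieved text.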

I’m not sure why this is happening. My developer friend and I have a few theories. It could be a limitation of RAG not being great at indexing entire books. It could be limitations with Gemini LLMs, technical limitations imposed by Google Cloud Platform (GCP), or something else. Given that the output I’ve been able to generate from Google AI Studio for an individual book has been pretty detailed, we think there’s a high probability it could be a GCP limitation.

This wasn’t the outcome I was hoping for at the beginning of the week, but that’s part of the process when you’re building something that hasn’t been done before. Definitely frustrating, but such is life. We’ll do more testing to try to figure this out.

Comparing Google Gemini LLMs

I’m using Google AI Studio to run one of the MVPs for my book project. Google’s AI is called Gemini, and there are eight different Gemini large language models (LLMs). Determining which would yield the best result was a concern, but Google had thought it through: AI Studio has a compare feature that lets you ask a question, select two LLMs, and see Gemini’s responses from both in a side-by-side view.

I’ve been testing prompting and system instructions this week, and the compare feature has been helpful. Seeing how the different LLMs respond to the same question is helping me narrow my choices faster.

Google AI Studio has limitations, but it’s a good tool for someone who is nontechnical and wants to fine-tune their AI experience.

Personal Hack: Learning New Technologies

I’ve spent the last few weeks diving into Google’s AI Studio, NotebookLM, Vertex AI Agent Builder, and various other AI-related tools from Google and other companies. A developer friend has helped me a lot. I was aware of some of these technologies from reading about AI and LLMs in general, but now that I’m trying to use them to create solutions for my personal project, my understanding of them has gone much deeper.

I have a clear idea of what I want the technology to do. I’m trying to figure out if it can specifically do what I want. If so, what are all the ways? What are the implications of each option? What I learn sticks in my memory. This is different from my normal approach of poking around to understand a tool’s general capabilities, which doesn’t result in good retention.

I’ve also noticed that when I seek help from technically oriented people to learn new technologies, describing the problem and how I want to solve it helps tremendously. It gives them a better idea of where to start, and the conversation is more focused on solutions to my problem than on a broad overview of the technology.

I’m not sure which, if any, of these technologies will be part of the solutions I build. But I’ve learned something: If I have a problem I’m excited to solve, I should try using new technologies to create a solution. Worst case, I’ll gain a better understanding of the technologies. Best case, I understand the technologies better and create a solution to my problem.  

Klaviyo CEO on Tech IPO Criteria

The IPO market for technology companies has been slow (see here). I’ve been curious why that’s the case (see here). Klaviyo is a well-known technology company that IPO’d in September 2023. I came across an interview with the CEO and co-founder, Andrew Bialecki. The interview caught my attention because he discusses bootstrapping the company initially and growing it to over $1 billion in revenue and, as of this writing, a market capitalization (i.e., valuation) of over $10 billion.

One section of the interview addressed what he thinks the criteria are for technology companies to go public or, said differently, what a company needs to demonstrate to get public market investors to buy its stock and have a successful IPO. Here are the criteria:

  • Positive free cash flow – The company needs to generate, not consume, cash.
  • Sustainable business – The company provides a product or service that customers will value in future years.
  • Durable growth – The company must be growing at a healthy rate. The smaller the revenue base, the higher the growth rate investors want to see. The growth rate must also be durable for the next four or five years.

Growing at a rapid rate that’s durable while not burning money isn’t easy to do. Many technology companies can achieve high growth rates, but they burn a ton of cash to accomplish this.

Bialecki’s perspective on the current IPO market for tech companies is valuable, given he’s one of the few who has successfully completed a technology IPO in the last two or so years.

He shares other great nuggets during the interview. If you want to hear just the section on his thoughts on IPOs, see here, but I found the entire interview worthwhile.

Back to One Book a Week

Last week, I shared that I wanted to finish Master of the Game: Steve Ross and the Creation of Time Warner by Connie Bruck and another book. Well, I fell short. I finished reading the Ross biography and started—but didn’t finish—another book.

The goal was aggressive and I wanted to check the box, but I didn’t. No excuses. I just came up short. I put a good effort toward the goal but ran out of time.

This week, I’m going to focus on finishing a single book. Every time I try to do more than that, I regret it.

Weekly Update: Week Two Hundred Forty

Current Project: Reading books about entrepreneurs and sharing what I learned from them

Mission: Create a library of wisdom from notable entrepreneurs that current entrepreneurs can leverage to increase their chances of success

Cumulative metrics (since 4/1/24):

  • Total books read: 35
  • Total book digests created: 14
  • Total blog posts published: 210
  • Total audio recordings published: 103

This week’s metrics:

  • Books read: 1
  • Book digests created: 2 (using technology)
  • Blog posts published: 7
  • Audio recordings published: 0

What I completed this week (link to last week’s commitments):

  • Read a biography about Steve Ross, founder of Time Warner and Warner Communications
  • Added two books to my “book library” MVP
  • Tested prompts, system instructions, and LLMs to improve the quality of responses from the “book library” MVP
  • Created two book digests via my “book digest” MVP
  • Tested prompts, system instructions, and system settings to improve the quality of AI-generated book digests

What I’ll do next week:

  • Read a biography or autobiography
  • Read my highlights from David Allen’s Getting Things Done
  • Read my highlights from Tiago Forte’s Building a Second Brain
  • Read two resources on prompt engineering
  • Test different prompting for the “book digest” MVP
  • Test adding a book’s contents to the “book library” MVP in different ways to improve response quality
  • Identify the path to launching the MVPs publicly so others can test them

Asks:

  • None

Week two hundred forty was another week of learning. Looking forward to next week!

Last Week’s Struggles and Lessons (Week Ending 11/3/24)

Current Project: Reading books about entrepreneurs and sharing what I learned from them

Mission: Create a library of wisdom from notable entrepreneurs that current entrepreneurs can leverage to increase their chances of success

What I struggled with:

  • No major struggles this week, just frustration. I couldn’t allocate as much time to this project as I wanted to. I had commitments to other projects that I had to make my priority this week.

What I learned:

  • Google’s Gemini large language models (LLMs) don’t provide in-depth responses when you feed them multiple books via retrieval-augmented generation (RAG). When they’re fed a single book, the responses are much better. Feeding too much “raw” information at once is counterproductive.
  • I described the problem I’m solving in a way that “feels” more accurate to me and that resonated with investors when I used it in a pitch: “Entrepreneurs with photographic memories who read a ton have a superpower. I didn’t win the genetic lottery for photographic memory. I’m trying to solve that problem.” See this post for more details. I wonder how many other entrepreneurs have the same problem and are aware of it.
  • My excitement about this project has jumped since my developer friend began helping me. Having someone with contrasting skills to talk with and work through hurdles with has made a noticeable difference.
  • I haven’t addressed what to do with the highlights I make in a book. I want to figure out how to feed them into an LLM.
  • I need to retain what I’ve highlighted. Maybe create something that helps me review highlights. Or maybe pump my highlights into an app built for that purpose.
  • Prompting is a big deal, and I have a lot to learn about creating better prompts.  

Those are my struggles and learnings from the week!

Update on MVP for My Latest Project

Current Project: Reading books about entrepreneurs and sharing what I learned from them

Mission: Create a library of wisdom from notable entrepreneurs that current entrepreneurs can leverage to increase their chances of success

With the help of a developer friend, I’ve created an MVP—two, really—for my current project. I’m starting by trying to solve two personal pain points:

  • Create a digest, for each book I read, that I can review and use as a reference. This was taking me 10 to 15 hours manually. The output was valuable, but the manual process wasn’t sustainable.
  • Create a library of all the books I’ve read. I want to be able to query all these books to help me solve problems or figure out the next action to take. Entrepreneurs with photographic memories who read a lot have a superpower. Their minds query everything they’ve read when they’re solving problems and make connections that uncover new insights. Those new insights help them develop unique solutions to problems or identify the unconventional next action to take given their goals. This is a huge advantage for these entrepreneurs. I don’t have a photographic memory, but I’m an avid reader. I want to use AI to help me remember and make connections between everything I’ve read. In other words, I want AI to help me develop this superpower.

I’m using an AI setup to create a digest of each book, and I’m impressed with the results. The quality isn’t as good as what I can manually create, but I see a path to getting it there. It’ll take work, but I think it can be done. One cool thing the AI has been good at is adding, at the end of each digest, the frameworks an entrepreneur used to achieve success. I didn’t include these in the digests I created manually, but having them all in one place is helpful.

I’m using a separate AI setup to get answers to questions based on the contents of multiple books I provide. I’ve found a way to do this using retrieval augmented generation (RAG), which I’m happy about because I’m not technical. RAG allows you to provide a knowledge base to AI to complement its large-language model (LLM). In theory, this should result in more accurate and detailed responses. I’m finding that the answers are accurate, but they’re too high level to add value. They’re not detailed enough. When I ask AI to provide more in-depth responses, it doesn’t meet my expectations. I’m not sure if it’s the setup I’m using, my prompting, my system instructions, or something else. I’m using the same LLMs for both setups. I’m hoping the problem is with the instructions I’m providing, not the setup. But I’ll keep testing to see.