Update on MVP for My Latest Project

Current Project: Reading books about entrepreneurs and sharing what I learned from them

Mission: Create a library of wisdom from notable entrepreneurs that current entrepreneurs can leverage to increase their chances of success

With the help of a developer friend, I’ve created an MVP—two, really—for my current project. I’m starting by trying to solve two personal pain points:

  • Create a digest of each book I read that I can review and use as a reference. Creating one manually was taking me 10 to 15 hours. The output was valuable, but the manual process wasn’t sustainable.
  • Create a library of all the books I’ve read that I can query to help me solve problems or figure out the next action to take. Entrepreneurs with photographic memories who read a lot have a superpower: when they’re solving problems, their minds query everything they’ve read and make connections that uncover new insights. Those insights help them develop unique solutions or identify the unconventional next action given their goals. I don’t have a photographic memory, but I’m an avid reader. I want to use AI to help me remember and make connections across everything I’ve read. In other words, I want AI to help me develop this superpower.

I’m using an AI setup to create a digest of each book, and I’m impressed with the results. The quality isn’t as good as what I can create manually, but I see a path to getting it there. It’ll take work, but I think it can be done. One cool thing the AI has been good at is adding, at the end of each digest, the frameworks the entrepreneur used to achieve success. I didn’t include these in the digests I created manually, so having them all in one place is helpful.
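To give a sense of the idea, here’s a simplified sketch of the digest step. It’s not my actual setup; it assumes the OpenAI Python SDK, and the model name and prompt wording are hypothetical.

```python
# Simplified sketch of the digest step. The model name, prompt wording, and
# passing a whole book in one call are illustrative assumptions; a real book
# would need to be chunked or summarized in passes.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DIGEST_PROMPT = (
    "You are creating a reference digest of a book about an entrepreneur. "
    "Summarize the key lessons, then end the digest with a section listing "
    "the frameworks the entrepreneur used to achieve success."
)

def create_digest(book_text: str) -> str:
    """Ask the model for a digest of one book, frameworks section included."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; not naming the models I use
        messages=[
            {"role": "system", "content": DIGEST_PROMPT},
            {"role": "user", "content": book_text},
        ],
    )
    return response.choices[0].message.content
```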

I’m using a separate AI setup to get answers to questions based on the contents of multiple books I provide. I’ve found a way to do this using retrieval-augmented generation (RAG), which I’m happy about because I’m not technical. RAG lets you give the AI your own knowledge base that its large language model (LLM) can pull from when it answers, rather than relying only on what the model learned during training. In theory, this should result in more accurate and detailed responses.

In practice, I’m finding that the answers are accurate but too high level to add value; they’re not detailed enough. When I ask the AI for more in-depth responses, it doesn’t meet my expectations. I’m not sure if the problem is the setup I’m using, my prompting, my system instructions, or something else. I’m using the same LLMs for both setups, so I’m hoping the problem is with the instructions I’m providing rather than the setup itself. I’ll keep testing to find out.
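For anyone curious what RAG looks like under the hood, here’s a bare-bones sketch. Again, this isn’t my actual setup; it assumes the OpenAI Python SDK, and the model names, prompts, and in-memory retrieval are illustrative. The idea is to find the book passages most relevant to a question and hand only those to the LLM along with the question.

```python
# Bare-bones RAG sketch: embed book passages, retrieve the most relevant ones
# for a question, and ask the LLM to answer using only those passages.
# Model names, prompts, and in-memory retrieval are illustrative assumptions.
import math
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def embed(texts: list[str]) -> list[list[float]]:
    """Convert text into vectors so passages can be compared by similarity."""
    result = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return [item.embedding for item in result.data]

def cosine(a: list[float], b: list[float]) -> float:
    """Similarity score between two vectors; higher means closer in meaning."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def answer(question: str, passages: list[str], top_k: int = 5) -> str:
    """Retrieve the top_k most relevant passages and answer from them alone."""
    passage_vecs = embed(passages)  # a real setup would precompute and store these
    question_vec = embed([question])[0]
    scored = sorted(
        zip(passages, passage_vecs),
        key=lambda pair: cosine(question_vec, pair[1]),
        reverse=True,
    )
    context = "\n\n".join(text for text, _ in scored[:top_k])
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[
            {
                "role": "system",
                "content": "Answer using only the provided book excerpts. "
                           "Be specific: include details and examples.",
            },
            {"role": "user", "content": f"Excerpts:\n{context}\n\nQuestion: {question}"},
        ],
    )
    return response.choices[0].message.content
```

Laying it out this way shows the pieces that could be to blame when answers come back too high level: which passages get retrieved, how many of them, and the instructions that tell the model how to answer. Those are the things I’ll keep testing.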