Glasp’s note: This is Hatching Growth, a series of articles about how Glasp organically reached millions of users. In this series, we’ll highlight experiments that worked, experiments that didn’t, and the lessons we learned along the way. While we prefer not to use the term "user," please note that we’ll use it here for convenience 🙇‍♂️
If you want to reread or highlight this newsletter, save it to Glasp.
Recap: #1–#5 at a glance
Why Share This Now?
Looking back, it’s tempting to compress Glasp’s story into a highlight reel: YouTube Summary → AI Clones → millions of users. But the real story is messier—and more useful.
We share the Kindle Personality Test now because:
It shows the limits of “fun hacks.” Even with clever tech and viral mechanics, experiments without a real pain point rarely scale.
It reminds us (and other founders) to test boldly but listen closely. Side projects can surface new ideas, but they can also distract unless paired with core progress.
It still had hidden wins. The test failed as a viral product, but it nudged more users to import Kindle highlights, thereby boosting retention and discovery features that matter in the long term.
Growth isn’t a straight line. It’s a sequence of probes—some stick, some don’t, all teach.
Context: The “Next 1 Million Project”
Right after YouTube Summary took off, we formalized a slate of fast, public experiments we called the Next 1 Million Project. The goal wasn’t only acquisition; it was learning—probing the market for simple concepts that could spread and reinforce Glasp’s mission.
One contender was the Kindle Personality Test (a.k.a. Know Thyself). The pitch:
“We can sketch a personality snapshot from the titles of the Kindle books you read.”
Two reasons this felt promising:
Self-insight tools were booming (MBTI, 16Personalities). People like identity labels that are shareable.
Visualization creates value. When you turn raw logs into clean visuals (think Goodreads stats or Strava mileage), users often feel seen—and they share.
How the product worked
Input: You share the titles of the Kindle books you’ve read.
Processing: An LLM infers likely traits and maps them onto a 3×3 “trait grid.” We pre-curated a vocabulary (e.g., Curious, Deliberate, Optimistic, Diligent, Analytical, Discerning).
Output: A simple profile card with your 3×3 grid, a short narrative (“You lean X over Y; your reading suggests Z”), and a prompt to share.
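The actual pipeline isn’t public, but the three steps above can be sketched roughly like this. All names are illustrative, and the LLM step is stubbed with a keyword heuristic so the sketch runs offline; in production, the titles would go to a model that returns trait scores.

```python
# Hypothetical sketch of the Kindle Personality Test pipeline:
# titles -> trait scores -> top traits laid out on a 3x3 grid.

TRAIT_VOCAB = ["Curious", "Deliberate", "Optimistic", "Diligent",
               "Analytical", "Discerning", "Empathetic", "Pragmatic",
               "Imaginative"]  # pre-curated vocabulary (first six from the post)

def infer_trait_scores(titles):
    """Stub for the LLM step: score each trait from the book titles.

    The real system would prompt an LLM; here a tiny keyword
    heuristic stands in so the example is self-contained.
    """
    text = " ".join(titles).lower()
    keywords = {
        "Curious": ["why", "how", "science"],
        "Analytical": ["data", "statistics", "logic"],
        "Imaginative": ["fiction", "fantasy", "dream"],
    }
    scores = {trait: 0.1 for trait in TRAIT_VOCAB}  # small base score for every trait
    for trait, words in keywords.items():
        scores[trait] += sum(text.count(w) for w in words)
    return scores

def build_grid(scores, size=3):
    """Take the top size*size traits by score and lay them out row by row."""
    top = sorted(scores, key=scores.get, reverse=True)[: size * size]
    return [top[i * size:(i + 1) * size] for i in range(size)]

if __name__ == "__main__":
    titles = ["Why We Sleep", "Naked Statistics", "The Science of Cooking"]
    for row in build_grid(infer_trait_scores(titles)):
        print(" | ".join(f"{t:<11}" for t in row))
```

The rendered grid plus a templated narrative sentence is essentially the whole product surface, which is part of why it shipped fast.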
Why titles, not highlights?
Copyright. Kindle highlight export is generally capped (~1% per book). Glasp offers a legal import route via Kindle Cloud Reader, allowing users to centralize highlights for note-taking and export them to tools like Notion/Obsidian. But for this project, to avoid gray areas and keep friction low, we used titles only.
Why a grid?
People tend to digest identity more effectively within a bounded visual frame. We’ve seen this pattern again and again: when you compress complexity into a neat, legible map, it feels insightful—even if the data is light. That’s part of the appeal we hoped to capture.
Launch & distribution
We shipped a clean landing page, demoed it on YouTube, posted on LinkedIn/X, and listed it on Product Hunt. The sharing loop was built in: users could post their grid as an image and tag fellow readers.
Result: Light curiosity, low conversion, and minimal sharing. It neither offended nor delighted; it just… existed.
The honest post-mortem: why it didn’t catch
Weak painkiller. YouTube Summary solved a burning problem (turn a 2-hour video into the gist, instantly). The personality test solved curiosity—fun, but not urgent.
Ambiguous value. The profile often read as “nice,” not “necessary.” Without striking accuracy or life utility, it couldn’t sustain word-of-mouth.
Narrow overlap. The intersection of Kindle readers who also want a personality snapshot is smaller than “people who need time back from videos.”
Perceived distraction. In community meetups, some power users asked, “Why this instead of deeper highlight features?” Another core user helpfully reframed it as a marketing probe, but the critique landed.
Shareability ceiling. Identity content spreads when it’s either strikingly precise or entertainingly bold. Ours was intentionally cautious; the share loop never ignited.
What still worked (unexpected upsides)
Activation for imports. Some users, for the first time, imported Kindle highlights into Glasp, then discovered our Daily/Weekly Highlight Review emails. That improved long-term retention.
Content flywheel. More imported highlights meant better Top Highlights per book on Glasp, which feeds search and discovery.
Legal clarity. By restricting to titles, we avoided copyright pitfalls and learned where the “safe lines” are for future LLM-powered features.
Lessons we’re taking forward
Pain > Play > Pretty. Pretty visualizations and playful experiences are accelerants, not engines. The engine is an urgent problem.
Earn the right to generalize. Personality claims require either lots of high-signal data or deeply personal context. Titles alone are often too thin.
Communicate with your core. Experiments are fine; reassure power users that the core product is still the priority (and show the roadmap).
Design for “who will share this, and why?” Sharing is a user story; if you can’t state the emotional payoff (status, identity flex, humor, surprise), the loop won’t spin.
Founder diary: remembering the meetups
Back then, we ran a monthly community Zoom (SF Friday). Conversations roamed far beyond product—study abroad plans, random soda availability, life updates. During the Kindle Personality Test window, a core user challenged the direction: “Why not ship highlight improvements first?” Another longtime user stepped in: “It’s a marketing experiment—they’re not abandoning core.” Both were right. The exchange reminded us to communicate intent early and ship core improvements in parallel.
Was it a failure?
As a growth lever? Yes. As a learning artifact? No. It clarified a boundary: curiosity alone rarely compounds. For compounding growth, we need tangible value, trustworthy references, and a share loop with emotional pull.
What’s next
Next episode, we’ll unpack other Next 1 Million bets we ran in the post-ChatGPT window—what moved the needle, what fizzled, and how we decided what to sunset vs. fold back into the core.
If you’ve got ideas—or want us to dive into a specific experiment—drop a comment. We read them all 🙏
Partner with Glasp
We currently offer newsletter sponsorships. If you have a product, event, or service you’d like to share with our community of learning enthusiasts, sponsor an edition of our newsletter to reach engaged readers.
We value your feedback
We’d love to hear your thoughts and invite you to our short survey.
Thank you for reading! We hope this post helps you, especially early-stage entrepreneurs, understand which marketing hacks work and which don’t.
Leave a comment if you have any topics you want us to share.
FYI, we received this feedback from a reader:
"Way to level up your newsletter! I don't even use Glasp, but the newsletter has gotten a lot better!"
We’re not sure whether to laugh, cry, or just say thanks, but we’ll take it as proof the content is getting better!