This is really wonderful - I've been talking my friends' ears off about this interview all week and sending around this article. Three thoughts:
1. Unfortunately, I think the attention-grabbing thumbnail might be turning people off from an otherwise very high-quality discussion. I think people see a contradiction between my introducing the topic as "the internet is absolutely desperate to grab your attention no matter the cost, and here's what can be done to fix it" and a thumbnail that uses those same strategies to draw attention. Ditto with the autoplaying video. I don't have a strong suggestion here, since some people will respond positively to that, but one remedy could be to make another post that's only the transcript, with no images or video. Then I could share that on more serious channels and possibly get more serious engagement.
2. Does a recommender system anything like what Ivan described exist? I'd happily pay for a subscription to try it, tell my friends, and give user feedback.
3. For sourcing the actual content that the recommender system chooses from, something like Lemmy could be good for reddit-style text posts and comment trees. And given the mission, maybe companies with existing high-quality content like Nebula and Headspace would be willing to license content at cost.
> Does a recommender system anything like what Ivan described exist? I'd happily pay for a subscription to try it, tell my friends, and give user feedback.
I mean, frying your brain doesn't seem like that great an idea to me, but: <https://arxiv.org/abs/2302.01724> "RLUR (reinforcement learning on user retention) has been fully launched in Kuaishou app for a long time, and achieves consistent performance improvement on user retention and DAU."
Ah, sorry for the confusion, I was referring to the part where Ivan said:
> But we can totally do that now. We can interview them with language models. A language model could ask, "What do you care about? What was the happiest moment of your last month? How could we make that happen more often?" It could just figure that out. The technology is more than ready. And yet, as far as I can tell, no one has built a recommender system that does anything like this.
I guess he directly says that nobody's doing it now. But if it turns out that anyone has this in the works I'd love to try it.
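For what it's worth, the interview-then-recommend pipeline Ivan describes is simple enough to sketch. This is a toy stand-in, not a real system: the interview questions are taken from his quote, but everywhere a language model would actually conduct the interview and judge relevance, I've substituted plain keyword overlap just to make the shape of the idea concrete.

```python
# Toy sketch of an interview-based recommender in the spirit of Ivan's quote.
# A real system would use a language model both to run the interview and to
# judge relevance; here keyword overlap stands in for the model's judgment.

INTERVIEW_QUESTIONS = [
    "What do you care about?",
    "What was the happiest moment of your last month?",
    "How could we make that happen more often?",
]

def build_profile(answers):
    """Collapse interview answers into a bag of lowercase keywords."""
    words = set()
    for answer in answers:
        words.update(answer.lower().split())
    return words

def score(profile, item_description):
    """Stand-in for an LLM relevance judgment: count keyword overlap."""
    return len(profile & set(item_description.lower().split()))

def recommend(answers, candidates, k=3):
    """Rank candidate items against the interview-derived profile."""
    profile = build_profile(answers)
    ranked = sorted(candidates, key=lambda c: score(profile, c), reverse=True)
    return ranked[:k]

# Hypothetical user answers and candidate items, purely for illustration.
answers = [
    "I care about learning woodworking and spending time outdoors",
    "Finishing a small bookshelf with my daughter",
    "Shorter projects we can do together on weekends",
]
candidates = [
    "Ten weekend woodworking projects to do with kids",
    "Celebrity gossip roundup",
    "Beginner guide to outdoors hiking trails",
]
print(recommend(answers, candidates, k=2))
```

The interesting part is exactly the part this sketch elides: replacing `score` with a model that understands *why* the bookshelf weekend was the happy moment, rather than matching surface words.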
Hey! I've changed the thumbnail for this post; feel free to share it now with the more serious channels you mentioned.
At the same time, we're still subject to the pressure that if we want a serious forecasting effort, we need 100 people to pay us 1% of their income to fund 1 FTE of capacity, which in turn requires roughly 8K people paying attention. And if a serious effort requires 10 FTEs, it's even more tricky...
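The funnel arithmetic behind that is worth making explicit. The conversion rate below is my own back-of-the-envelope assumption, inferred from the 8K-attentive-to-100-payers ratio in the comment; everything else follows from it.

```python
# Rough funnel arithmetic for funding a forecasting effort.
# PAYERS_PER_FTE: 100 people each paying 1% of income covers ~1 salary.
# CONVERSION: assumed share of attentive readers who become payers,
# inferred from the 8K -> 100 ratio above (an assumption, not a measurement).

PAYERS_PER_FTE = 100
CONVERSION = 100 / 8000  # 1.25%

def audience_needed(ftes):
    """Attentive audience required to fund a given number of FTEs."""
    payers = ftes * PAYERS_PER_FTE
    return int(payers / CONVERSION)

print(audience_needed(1))   # -> 8000
print(audience_needed(10))  # -> 80000
```

So a 10-FTE effort needs an attentive audience of ~80K under these assumptions, which is why the thumbnail question isn't just cosmetic.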
1. Noted! I think it's worth trying to reserve that style of thumbnail for YouTube. It's a valuable signal that you're less willing to share it in some channels.
2/3. Not as far as I'm aware, but maybe Ivan knows of one. I'll share this comment with him. If one wanted to try this with a "live" feed to see how it parses real discourse, one could build it as a Farcaster client, since it's an open protocol. Maybe this guide would be helpful: https://blog.thirdweb.com/guides/build-a-farcaster-client/