bottlepalm 20 hours ago [-]
How much money did Y Combinator invest to get that 0.6% stake? I hope it was more than zero. Funny how in 2019 they just start doling out shares in a previously shareless entity.
ucyo 16 hours ago [-]
Sam Altman was president of Y Combinator from 2014 to 2019. Of course YC has a stake in OpenAI. My surprise is why it is that low…
JumpCrisscross 15 hours ago [-]
> of course YC has a stake in OpenAI
Sam has worn multiple hats for ages. Not all of his holdings are cross-invested. (And they don't have to be, as long as everything is disclosed.)
It's absolutely material that Sam's statements about having no equity are, granted, unsurprisingly, a lie. It's a bit more surprising to me that Graham would publicly defend Altman without disclosing this bias. And it's actually shocking to me that Livingston, a journalist, was in on the fix.
dgellow 13 hours ago [-]
From what I see they had 13 funding rounds; that's quite a lot of dilution to be expected if you were early.
financetechbro 14 hours ago [-]
Every time they raise new $, old shareholders get diluted. OAI has raised a lot of money
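To put a rough number on that effect, here is a hedged sketch (the 15% sold per round is a made-up illustrative figure, not OpenAI's actual terms):

```javascript
// Illustrative only: how an early stake shrinks across funding rounds.
// Assume each round sells 15% of the company to new investors
// (a made-up figure; real rounds vary widely).
const DILUTION_PER_ROUND = 0.15;
const ROUNDS = 13;

let stake = 1.0; // 100% at founding
for (let round = 0; round < ROUNDS; round++) {
  stake *= 1 - DILUTION_PER_ROUND; // existing holders keep 85% of their share
}

// After 13 such rounds, roughly 12% of the original stake remains.
console.log((stake * 100).toFixed(1) + "%");
```

Even modest per-round dilution compounds quickly, which is why early stakes in heavily funded companies end up far smaller than the headline founding split.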
rvz 24 hours ago [-]
Greg Brockman (President of OpenAI) also said that OpenAI is around 80% of the way to achieving "AGI", but it was disclosed that his stake in OpenAI is worth around $30B.
So what does the true definition of "AGI" actually mean? It depends on who you ask.
It appears to many to mean "A Great IPO" or "A Gigantic IPO" at this point rather than "Artificial General Intelligence" which has been clearly hijacked to mean something else.
fodkodrasz 21 hours ago [-]
> So what does the true definition of "AGI" actually mean? It depends on who you ask.
AGI - Automatically Generating Income
FartyMcFarter 20 hours ago [-]
AGI - Ad-generated income.
Galanwe 20 hours ago [-]
> So what does the true definition of "AGI" actually mean?
No worries, there will be a startup creating "AGI Bench"; score >=80% and you're AGI. They'll be valued at $50B.
in-silico 19 hours ago [-]
The ARC-AGI benchmark is basically this already
jononor 17 hours ago [-]
That is not at all the intention of the ARC team. By the ARC team's definition, passing any single ARC-AGI benchmark does not mean that AGI has been achieved. Instead, AGI would be considered achieved when we are no longer able to come up with new benchmarks that AI systems do not immediately do well on.
wiseowise 19 hours ago [-]
> So what does the true definition of "AGI" actually mean? It depends on who you ask.
When Greg Brockman makes a lot of money from the deal.
jimnotgym 19 hours ago [-]
First do 80%, then do the remaining 150%, I would imagine
giancarlostoro 21 hours ago [-]
That's the trick, right? What do they really mean by AGI? Depending on how narrow you go, it sounds like we've already achieved it. However, if they keep saying they'll achieve it without ever defining it first, they can keep saying it endlessly to create hype.
One key thing I've heard about AGI, which I think would be the most determining factor for me, is a model that learns on the fly. That could be done one way or another, but when you consider that LLMs basically run like "ROM" files, it gets a little complicated.
I think we need to re-imagine how LLMs are built, trained, and run. But also, figure out how to drastically lower the cost of running them.
bitexploder 15 hours ago [-]
I think they would not be LLMs then.
giancarlostoro 12 hours ago [-]
Agreed. It feels like LLMs are just one piece of the final solution toward AGI. I do foresee an "LLM-flavored AGI" that does all those things via tool calling, RAG, and other techniques. The real AGI, in my eyes, will be more than just an LLM though.
wg0 24 hours ago [-]
> So what does the true definition of "AGI" actually mean?
"If your stake is > $30 billion" seems like a more reasonable and realistic criterion to me.
christkv 21 hours ago [-]
AGI is defined as whatever it takes for stock holders to make $$$ I guess?
avazhi 21 hours ago [-]
One of the random tidbits I can remember from the New Yorker Altman deep dive was Brockman being obsessed with making $1B. It was memorable because I actually cringed reading it.
lukan 22 hours ago [-]
> "Artificial General Intelligence" which has been clearly hijacked to mean something else
I mean, the goalposts shifted. The game Go used to be considered to require true AI; so did passing the Turing test. Scanning, analyzing, and improving complex codebases largely on their own would have been considered some sort of AGI by me 6 years ago.
Now sure, we all know they lack true understanding. But it gets blurry at times what that actually means.
But I don't buy that there will be a magic point where self-improving AGI explodes toward singularity. The current approach is very, very energy- and compute-intensive, and that is unlikely to change.
sevenzero 21 hours ago [-]
Maybe the dystopian AI development will result in energy funding and advancements that actually benefit most of us. I really hope all this turns out to be a net positive for humanity. If we won't get true "AGI", which we are far, far away from, we could at least make some advancements in other areas.
lukan 21 hours ago [-]
Well, I surely hope so, but I feel less positive if that means a nuclear power plant parked in front of every new rushed datacenter.
https://www.scmp.com/news/china/science/article/3351721/chin...
But in general I do believe AI has the potential to be a great positive for humanity on its own - if the open models stay strong and not only a few people control them.
sevenzero 21 hours ago [-]
I can see your reasoning. Unfortunately I see and experience everything wrong with AI in my daily life. People ask it what gifts to buy for their loved ones or use it as a therapist substitute. Humans are not ready for this technology. A lot of us are even losing the ability to read properly (even though that's related to technology in general). It's extremely scary. The only advantage humans have is an extraordinarily big brain and a pair of thumbs; we can't afford to use our brains less.
lukan 20 hours ago [-]
I mean, people have been doing dumb shit since the beginning of time, and I considered this society messed up since way before LLMs.
And yes, humans as a whole are not even ready for cars or nuclear weapons. We built and used them anyway.
But my brain is still pretty busy, and I don't think the younger generation is getting dumber because of LLMs, but rather from mindlessly consuming TikTok and co.
LLMs are also a great learning tool, and anyone using them should know their limits quickly. Not all do, though. That is obvious.
keybored 21 hours ago [-]
News at Y Combinator used to be my preferred reading diversion: reading interesting technical stories, debates on political topics, learning things, my comfort food of the same topics repeating the same arguments over the span of a decade. Now it’s that but also 65% AI doomscrolling.
benterix 19 hours ago [-]
Same with me. I found a new hobby: reading pre-LLM HN. It turns out I missed so many interesting projects and discussions. Some are a bit funny in hindsight, some are inspiring.
At the same time, the current version of HN is still usable, you just need to mentally filter LLM-related stuff. It was similar with cryptocurrencies TBH.
nozzlegear 14 hours ago [-]
I just press the "Hide" button under most stories related to AI; it removes the temptation for me to jump into the thread, and surfaces more interesting (i.e. not AI bullshit) submissions.
You could probably automate it with a browser extension and a regex that looks for words like "AI", "LLM", and the names of any popular companies or projects.
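A minimal sketch of that idea (the keyword list and DOM selectors here are illustrative guesses, not taken from any real extension):

```javascript
// Hypothetical userscript sketch: hide front-page rows whose titles
// match an AI-related keyword list. The selectors mirror HN's "athing"
// row / "titleline" link markup, but treat this as an untested example.
const AI_PATTERN = /\b(AI|LLM|GPT|OpenAI|Anthropic|Gemini|Copilot)\b/i;

function looksLikeAITopic(title) {
  return AI_PATTERN.test(title);
}

function hideAIStories(doc) {
  let hidden = 0;
  for (const row of doc.querySelectorAll("tr.athing")) {
    const link = row.querySelector(".titleline a");
    if (link && looksLikeAITopic(link.textContent)) {
      row.style.display = "none"; // same effect as pressing "Hide"
      hidden++;
    }
  }
  return hidden;
}
```

The word-boundary `\b` anchors keep short tokens like "AI" from matching inside unrelated words, though any fixed keyword list will still miss stories that don't name a model or company in the title.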
big_toast 9 hours ago [-]
The AI bothered me less, but I got a little frustrated with less-than-substantive comments on the front page.
Oddly I made an extension* to use the site more the way I wanted and now I find it a little easier to get a higher SNR past the front page and am enjoying that. I didn't really get past post rank 60 for two decades and now generally get much further.
*(It's basically vim-keys support for two functions: one to "highlight" stories/comment threads I think will be promising, and a hide function for the rest.)
globalnode 22 hours ago [-]
i always thought there were two reasons for AI interest on HN.
1. since AI has captured the imagination of capitalists and they think this is the next industrial revolution, they gotta be in it to win it. combined with the fact that i believe most people here are wealthy or at least aspirationally so, that explained half of it.
2. the other half is that AI as a tech is interesting from a mathematical and compsci point of view, tho certainly not interesting enough to justify the proportion of topics about it here.
i guess i should add a 3rd reason.
3. ycomb has a financial stake in spreading the news about how wonderful this tech is!
lolol
tomhow 22 hours ago [-]
The only thing that should be surprising to anyone who knows about the early history of OpenAI is how little of it YC owns, given how much it leveraged YC’s credibility to get started (early employees joined an institution called “YC Research”, operating from YC’s office space). Once that stake is divided up among all the LPs and small unit holders, it’s not a huge outcome.
Also: nothing gets sustained attention on HN unless good hackers find it interesting. Our entire objective is to be the website that attracts the best hackers, serves them the most interesting content and facilitates the most interesting discussions. That can’t happen if we’re nefariously pushing a commercial agenda.
robocat 20 hours ago [-]
Rhymes with reddit.com at IPO:
- Sam Altman ~9%
- YCombinator had <5%
- Steve Huffman ~3%, although he had ~4% voting power via Class B shares.
- Alexis Ohanian: Minimal
- Advance Publications: ~30%
- Tencent: ~11%.
The original founders (Steve Huffman and Alexis Ohanian) were massively diluted when they sold Reddit to Advance Publications in 2006 for $10 to $20 million.
Numbers above are vaguely accurate. See https://www.untaylored.com/post/who-owns-reddit
Even if Y Combinator doesn't have ownership in OpenAI, they do have ownership in a lot of AI startups and would still be incentivized to spread AI news.
an0malous 13 hours ago [-]
The interest in AI is global and spans nearly every corner of the Internet, it’s not something exclusive to HN. The root cause of this is #1 by a wide margin. Our society is governed by money, the investor class sees an opportunity to become trillionaires, the labor class is afraid of becoming the permanent underclass, all of these things are defined by money.
tim333 12 hours ago [-]
It also can give insights into natural intelligence.
ValentineC 20 hours ago [-]
One more (for me, and definitely for many others since I've seen similar posts):
It's letting me build stuff, way more cheaply, that I probably wouldn't be able to build by myself without raising lots of money, at least until GitHub Copilot gets incredibly nerfed next month.
globalnode 17 hours ago [-]
sorry everyone, sometimes i go down these rabbit holes
greggsy 19 hours ago [-]
…or many people are using the products day to day in their work as IT professionals or developers?
I think it's mostly the above, rather than a capitalist conspiracy or its relevance as a scientific curiosity.
aurareturn 13 hours ago [-]
I'll present an alternative set of reasons:
1. AI is tremendously useful at the current intelligence level and people here like to be more productive.
2. AI is exciting - both in the potential applications and new models getting smarter.
3. Many workers here have either transitioned to building agents or they're heavily using AI for their work.
FergusArgyll 23 hours ago [-]
"well-known AI expert Gary Marcus"
tomhow 20 hours ago [-]
Please don't post snark on HN. Gary is, objectively, an AI expert: he's been a leading researcher for decades and sold an AI company to Uber. He obviously sees things differently from the current generation of AI company leaders and has concerns about the direction of the AI industry, but that doesn't mean it's fine to disrespect someone like this here. The first rule of the "In Comments" section of the guidelines is: be kind.
https://news.ycombinator.com/newsguidelines.html
Jessica Livingston's personal stake in OpenAI is maybe at most 0.1% or less and Paul Graham's, afaik, is 0.
So the bias doesn't seem as large as OP thinks.
*https://xcancel.com/paulg/status/2041366050693173393
And "toughness, adaptability, and determination" >>> "ambition", frankly
chis 24 hours ago [-]
Such suspicious phrasing lol. So you’re saying Paul Graham and his wife Jessica have 800 MILLION dollars worth of OpenAI stock, and that’s not so significant?
oliculipolicula 22 hours ago [-]
We're forced to decide whether $0.8B is enough to risk her credibility over or, if it matters to us, to gather more information first.
anewhnaccount2 22 hours ago [-]
Exactly! It's only $0.0008T. Pocket change really...
JumpCrisscross 15 hours ago [-]
Has The Information broken any critical news about OpenAI? I never connected the dots on why I've found it increasingly less worth paying for over the last year or two, but editorial bias feels correct.
crowcroft 24 hours ago [-]
What is 0.1% of a trillion? I think that's quite a large number still.
clickety_clack 23 hours ago [-]
Only a Sith deals in absolutes...
bitmasher9 24 hours ago [-]
OpenAI’s last post-money valuation was less than a trillion. They’ll probably cross that point in the future, but let’s not get ahead of ourselves.
kibibu 24 hours ago [-]
It was $852 billion - 0.1% of which is $852 million
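For anyone double-checking the arithmetic (0.1% is one part in a thousand):

```javascript
// 0.1% of a $852 billion valuation, in dollars.
const valuation = 852e9;        // $852,000,000,000
const stake = valuation / 1000; // 0.1% = 1/1000
// stake is 852e6, i.e. $852 million -- not $85.2 million.
console.log(stake);
```

A common slip is dividing by 10,000 (as if 0.1% were 0.01%), which would give the $85.2M figure floated below.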
epolanski 21 hours ago [-]
That can only buy a luxury mega yacht, a few mansions and private jets, but let's be real, after this you're lucky if you're left with just enough to buy yourself a European football club.
hhh 22 hours ago [-]
85.2m*
crowcroft 22 hours ago [-]
Are you sure?
kibibu 24 hours ago [-]
Does Paul Graham no longer have a stake in Y Combinator?
8ig8 1 days ago [-]
Seems to be an unusually quiet post for something posted 3 hours ago.
roxolotl 1 days ago [-]
My understanding is dang has said in the past they do some anti-moderation (I'm sure he has a better term) for posts related to Y Combinator. That is to say, they moderate less and might, do not quote me here, even boost a tad. So an upvoted story from a well-reputed source, even without many comments, is likely to hang onto the front page for a bit.
"Less" doesn't mean "not at all", of course—that would be too big a loophole. But it does mean strictly less, and we stick to that, despite its various downsides, because the upside is bigger.
In the present case, it means we haven't applied any moderation downweights to this post, even though it's obviously the sort of thing we would downweight under other circumstances, since it's neither particularly substantive nor intellectually interesting (though it could be some other kind of interesting, at least to some readers).
pdpi 23 hours ago [-]
The actual content of the post is straightforward and not particularly novel — YC has a stake in OpenAI, that creates a conflict of interest, and the New Yorker is negligent (in the informal sense) for not putting that in their piece.
It’s a sobering reminder and worthy of being on the front page on that basis alone, but I don’t see much of a discussion to be had. “Unusually quiet for a front page post” is probably where this post is meant to be.
gyomu 23 hours ago [-]
> not particularly novel
As far as I know this is the first time anyone has publicly claimed to know, quoting insider sources, what YC's actual stake in OpenAI is.
iambateman 1 days ago [-]
Do you have something to say about it?
wg0 24 hours ago [-]
Nothing unusual. There's not an AI company (mostly AI wrappers) on the planet in which Y Combinator hasn't sprinkled their cash already.
I'd go as far as to say that it's impossible at this point to form an AI company without YCombinator investing in it.
applfanboysbgon 23 hours ago [-]
You would be incorrect.
Vandit296 20 hours ago [-]
I disagree; there are tons of early-stage investors who invest even before YC. You can find them on OpenVC.
geuis 22 hours ago [-]
Could someone (non-AI) summarize this? I'm sorry but I just literally don't have time to even read long posts from very reputable sources. I know I need the info but time just isn't there in my life right now.
FabHK 21 hours ago [-]
Ronan Farrow and Andrew Marantz had a critical investigative report in The New Yorker on Sam Altman and OpenAI last month asking whether Altman could be trusted.
Paul Graham of Y-Combinator in response tweeted some positive things about Altman, emphasising that they didn't fire him as CEO of YC (though not going as far as declaring him trustworthy).
Now John Gruber of Daring Fireball (an Apple blog) added context by claiming that YC owns a 0.6% stake in OpenAI, worth around $5bn, which might colour Graham's judgement.
dgellow 13 hours ago [-]
Just skip and ignore if you don’t have the time, you likely have more important things to do
pixel_popping 19 hours ago [-]
why non-AI? If AI is arguably great at something, it's this.