Hugo Hacker News

How good is Codex?

legutierr 2021-08-19 13:53:57 +0000 UTC [ - ]

Wasn't the whole idea behind OpenAI that it would actually be "open"? Or is the name of the organization now entirely a misnomer?

Not only are they not releasing Codex (and GPT-3), but in order to get access to the API you have to apply for access and be judged against a proprietary set of criteria that are entirely opaque.

Furthermore, I imagine that if you do any innovative work building on top of Codex (or GPT-3) they would control that work product, they would be able to cut you off from accessing your work product at any time if it suits them, and they would be able to build off of your work themselves, co-opting any unique value that you may create.

Why the hell should anyone building an AI business even want to work with them? Sure, it might accelerate your effort right at the beginning, but if you are unable to reproduce your results outside of their platform, you will always be beholden to them.

In a few years will we be reading stories about unfortunate entrepreneurs who had built their businesses on top of OpenAI only to have the rug pulled out from under them, like Amazon sellers whose product was cloned by Amazon Basics, or Twitter clients cut off from the API, or iOS apps made redundant by their core functionality being copied by Apple, or search-driven businesses circumvented by the information cards that Google displays directly in the search results...? Etc, etc.

Am I missing something here?

webmaven 2021-08-19 15:13:22 +0000 UTC [ - ]

OpenAI transitioned from non-profit to for-profit in 2019, took about $1 billion from Microsoft (there has been speculation that this was mostly in the form of Azure credits), and announced that Microsoft would be their preferred partner for commercializing OpenAI technologies: https://openai.com/blog/microsoft/

The name is now a complete misnomer.

There may still be some benefit for researchers to collaborate with them (same as with any of the other corporate research labs), but anyone trying to build a business on non-public APIs should obviously tread carefully.

So, no. You aren't missing anything.

cornel_io 2021-08-19 17:03:51 +0000 UTC [ - ]

> Not only are they not releasing Codex (and GPT-3), but in order to get access to the API you have to apply for access and be judged against a proprietary set of criteria that are entirely opaque.

I have yet to hear of one person who has gotten access without either a) being Twitter-notable in the ML space, or b) using a personal connection to jump the queue (I hit up someone a couple steps removed from OpenAI and got lucky). As far as I can tell they are just collecting email addresses to gauge interest, and are not even evaluating people who cold-apply through their form.

Please correct me if I'm wrong, though, I only know what I've heard within my own network! It's totally possible they're allowing a very very slow trickle of external unconnected people in.

burkaman 2021-08-19 14:06:05 +0000 UTC [ - ]

They started as a non-profit and sort of claimed they would actually be open:

"As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies." - https://openai.com/blog/introducing-openai/

However, a couple paragraphs down might have been a clue to the likely future: "Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI."

Currently they are not really a non-profit, and they are mostly working with Microsoft.

intuitionist 2021-08-19 15:09:09 +0000 UTC [ - ]

Many of these people are very good at tax structuring — and IANAL — but it’s pretty hard to believe that what they’re doing with OpenAI is kosher.

3pt14159 2021-08-19 15:09:59 +0000 UTC [ - ]

They needed money in order to compete with Google and Facebook, so they switched the model to "investors make a maximum of 10x return and after that they lose their stake" or something like that.

As for being open, I think keeping potentially dangerous tech private for a while, while openly sharing the results of research, is prudent. The last thing I want is for some AI model to go public and then for someone to find a way to generate a bunch of computer viruses or propaganda with it.

Isinlor 2021-08-19 16:33:41 +0000 UTC [ - ]

I think it's a maximum of 100x return. So it's for-profit with practically unlimited returns.

> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.

https://openai.com/blog/openai-lp/

burkaman 2021-08-19 15:36:28 +0000 UTC [ - ]

Agreed that not releasing models might be a good thing, I'm just pointing out that's not how they initially pitched the organization. I wonder if they might have gotten less favorable publicity when they launched if they just said "we're starting an AI company to compete with Google".

As for the business model, there's nothing wrong with it in principle, it's just not what they said they would do. There's no reason a well-funded nonprofit research organization needs to compete with Google and Facebook. They changed their funding model because they wanted to compete, not because they needed to. And it hasn't been very long since they said "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." You have to assume they knew from the start that they'd probably want to pivot to a business.

Nullabillity 2021-08-19 15:12:17 +0000 UTC [ - ]

They aren't a non-profit at all anymore: https://openai.com/blog/openai-lp/

burkaman 2021-08-19 15:28:47 +0000 UTC [ - ]

They're governed by a separate OpenAI Nonprofit which is why I said "sort of", but yes it's more profit than non.

poszlem 2021-08-19 13:57:35 +0000 UTC [ - ]

OpenAI is approximately as open as the German Democratic Republic or Democratic People's Republic of Korea were/are democratic.

cryptonector 2021-08-19 15:54:28 +0000 UTC [ - ]

Lenin believed in democracy. That is: "democracy of the elite". That is: democracy in the politburo. That is: democracy among a group of a handful of people. And no more.

By Lenin's definition, all those Democratic People's Republics really were democratic.

What that means for OpenAI, I don't know :/

tyre 2021-08-19 16:02:41 +0000 UTC [ - ]

This is how America was conceived as well. Democracy was not meant to be universal and there were checks placed even on the limited portion of society who could vote.

The electoral college, for example, is set up this way. The idea is that electors—who were not democratically elected—could block a populist candidate.

As we saw in 2016, this check failed.

cryptonector 2021-08-19 17:02:45 +0000 UTC [ - ]

Not the same. Lenin believed in a democracy of not more than 8 or so people. The Bolsheviks won the Soviet elections and Lenin still staged a coup.

The American Constitution establishes an indirect democracy: direct at the local level, indirect at the Federal level (though eventually, by statute and amendment, it became direct for Congress).

> As we saw in 2016, this check failed.

Nonsense. That "failure mode" was designed for. It has happened a few times. Working as expected.

iurysza 2021-08-19 14:09:18 +0000 UTC [ - ]

You can auto-generate believable fake news with it. This will be a tricky one to solve.

kneel 2021-08-19 16:47:43 +0000 UTC [ - ]

Not just news, you can create bot armies that push narratives and influence online discussions.

criticaltinker 2021-08-19 14:36:23 +0000 UTC [ - ]

Yup, and it's only going to get worse. At least for now, it's still difficult for these models to generate long news articles that are coherent.

> mean human accuracy at detecting articles that were produced by the 175B parameter model was barely above chance at ∼52% [...] Human abilities to detect model generated text appear to decrease as model size increases [...] This is true despite the fact that participants spend more time on each output as model size increases [1]

> for news articles that are around 500 words long, GPT-3 continues to produce articles that humans find difficult to distinguish from human written news articles [1]

[1] https://arxiv.org/pdf/2005.14165.pdf

reidjs 2021-08-19 15:16:29 +0000 UTC [ - ]

Solution: stop reading the news. You may as well start now.

legerdemain 2021-08-19 16:37:56 +0000 UTC [ - ]

So it sounds like using Codex to write code is like replacing your computer with an intern: it can write plausible-looking code and commit messages, but it needs constant close attention from an experienced engineer to stop it from turning into a bug machine.

criticaltinker 2021-08-19 14:16:43 +0000 UTC [ - ]

I find it funny and concerning that you must prompt the model to do an SQL insert 'safely' to avoid injection vulnerabilities. I'm sure someone in the near future will find a way to train models to avoid well-known hazards such as the OWASP Top 10 [1].
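
For anyone unfamiliar, the 'safe' behavior the model has to be nudged toward is just an ordinary parameterized query. A minimal sketch in Python (the table, column, and payload here are made up for illustration):

    import sqlite3

    # Throwaway in-memory DB; "users"/"name" are hypothetical names.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")

    user_name = "Robert'); DROP TABLE users;--"  # classic injection payload

    # Unsafe: string formatting splices attacker-controlled input directly
    # into the SQL text, so the payload can rewrite the query:
    #   conn.execute("INSERT INTO users (name) VALUES ('%s')" % user_name)

    # Safe: a parameterized query. The driver passes the value separately
    # from the SQL, so the payload is stored as plain text, not executed.
    conn.execute("INSERT INTO users (name) VALUES (?)", (user_name,))
    conn.commit()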

The field of program synthesis based on NLP models is really starting to heat up - we have OpenAI Codex, GitHub Copilot, and a recent paper [2] from Google Research demonstrating that these same techniques can generate programs which solve mathematical word problems. Here is an example of the latter:

> Prompt: Please, solve the mathematical problem: a and b start walking towards each other at 4pm at a speed of 2 kmph and 3 kmph. They were initially 15 km apart. At what time do they meet? n0 = 4.0, n1 = 2.0, n3 = 15.0.

> Model output (python program):

    n0 = 4.0
    n1 = 2.0
    n2 = 3.0
    n3 = 15.0
    t0 = n1 + n2
    t1 = n3 / t0
    answer = n0 + t1
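
(Worth noting: the generated program is actually correct. Combined speed is t0 = 2 + 3 = 5 kmph, time to meet is t1 = 15 / 5 = 3 hours, and answer = 4 + 3 = 7, i.e. they meet at 7pm. The model even had to introduce n2 = 3.0 on its own, since the prompt's variable list skips it.)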

[1] https://owasp.org/www-project-top-ten/

[2] https://news.ycombinator.com/item?id=28217026

FractalHQ 2021-08-19 14:22:24 +0000 UTC [ - ]

Interestingly, GitHub Copilot is built on Codex.

tylermauthe 2021-08-19 14:55:15 +0000 UTC [ - ]

My guess is the end result of all this "AI" assisted code-generation is that it will have the same impact on the software engineering industry as spreadsheets had on accounting. I also believe that this AI-powered stuff is a bit of a "two-steps forward, one step back" situation and the real innovation will begin when ideas from tools like Louise [1] are integrated into the approach taken in Codex.

When VisiCalc was released, departments of 30 accountants were reduced to 5 because of the improvement in individual worker efficiency. However, accounting itself remains largely unchanged, and accountants are still a respected profession who perform important functions. There's plenty of programming problems in the world that simply aren't being solved because we haven't figured out how to reduce the burden of producing the software; code generation will simply increase the output of an individual software developer.

The same forces behind "no-code" are at work here. In fact I see a future where these two solutions intermingle: where "no-code" becomes synonymous with prompt-driven development. As we all know, however, these solutions will only take you so far -- and essentially only allow you to express problems in domains that are already well-solved. We're just expressing a higher level of program abstraction; programs that generate programs. This is a good thing and it is not a threat to the existence of our industry. Even in Star Trek they still have engineers who fix their computers...

[1] - https://github.com/stassa/louise

mxwsn 2021-08-19 15:10:40 +0000 UTC [ - ]

Good take, but I'm not convinced. I would suspect (see epistemic status) that while correctness is easy for a single accountant to reason about and maintain in a spreadsheet, since there is a clean mapping from a precise Excel API to function, the mapping from natural language to code is problematic. I won a t-shirt in the recent OpenAI Codex challenge and found it hard to reason systematically about the behavior and generalization properties of generated code. When the generated code was wrong, it was frustrating on a different level than if I had written the code myself.

Epistemic status: I don't know anything about accounting

Tarq0n 2021-08-19 15:47:24 +0000 UTC [ - ]

It's not about what Codex can do now, but what the technology will be able to do a few generations into the future. Anticipating this shift in the software labor market is important to many people on HN.

Codex as it stands is just a novelty, but it does show the shape of what's to come.

eggsmediumrare 2021-08-19 15:27:30 +0000 UTC [ - ]

It's not a good thing for those 25 other engineers.

Kiro 2021-08-19 14:55:25 +0000 UTC [ - ]

Is Codex different from the regular code examples on OpenAI? E.g. https://beta.openai.com/examples/default-translate-code or https://beta.openai.com/examples/default-fix-python-bugs

I can do that now with my OpenAI account but Codex needs a specific invite. What's the difference?

smitop 2021-08-19 16:52:26 +0000 UTC [ - ]

Codex is a separate model from GPT-3, trained specifically on source code, and it works better for that. I have access to Codex; when I click on those playground links I see that the engine selected on the right is "davinci-codex", and it says "The Codex models are currently in private beta. Usage is free during this period" at the bottom.
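
If you want to poke at it from code rather than the playground, it's the same Completion endpoint as the GPT-3 engines with the engine name swapped. A rough sketch with the current openai Python client (untested as pasted; assumes your key is in the OPENAI_API_KEY env var and that your account has the private-beta invite):

    import os
    import openai

    openai.api_key = os.environ["OPENAI_API_KEY"]

    # Same Completion API as GPT-3; only the engine name differs.
    # "davinci-codex" will 404 without private-beta access.
    response = openai.Completion.create(
        engine="davinci-codex",
        prompt="# Python 3\n# Return the nth Fibonacci number\ndef fib(n):",
        max_tokens=64,
        temperature=0,
    )
    print(response["choices"][0]["text"])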

Nullabillity 2021-08-19 15:18:04 +0000 UTC [ - ]

Looks like both of those are using Codex, as far as I can tell?

Kiro 2021-08-19 14:09:55 +0000 UTC [ - ]

I jumped straight to the Conclusion expecting to find... a conclusion.

smitop 2021-08-19 14:17:37 +0000 UTC [ - ]

I actually ended up putting what you would expect from a conclusion in the introduction. The conclusion itself is pretty worthless in hindsight.

pistoriusp 2021-08-19 16:03:47 +0000 UTC [ - ]

Can Codex write itself?

peteretep 2021-08-19 13:57:40 +0000 UTC [ - ]

> Often Codex will write code that looks right at first glance, but actually has subtle errors.

iurysza 2021-08-19 14:07:42 +0000 UTC [ - ]

So... it's like a real human!

PaulHoule 2021-08-19 14:13:17 +0000 UTC [ - ]

It's outright depressing that we have to fight back example by example to show that Codex and similar scams are nowhere near able to imitate the ability of a programmer to code something correctly.

It's degrading of the work that we do for one thing.

treme 2021-08-19 14:51:45 +0000 UTC [ - ]

You are in denial if you are doubting the effectiveness of Codex. Keep in mind that this is just a beta.

If you saw how GPT-2 improved to GPT-3 in a year, it's easy to see where this is gonna go over next few iterations.

It's a 2007-iPhone-level catalyst that's going to dramatically shift the landscape.

PaulHoule 2021-08-19 14:57:46 +0000 UTC [ - ]

And GPT-4 will almost work when it is powered by a Dyson sphere, and GPT-5 would require more text to train on than exists on all the planets of the universe.

All of those things are structurally inadequate to solve the problem in front of them, and they only seem to make up for it for the same reason ELIZA seemed intelligent: people are willing to believe.

You could put fantastically more resources into that approach and find you're approaching an asymptote. It's the deadliest trap in advanced technology development and it happens when you ignore the first law of cybernetics.

Anyone who's been a practitioner in the software field has experienced that repairing mistakes in a program written by somebody clueless is almost always vastly more expensive than writing it correctly to begin with. I remember helping a friend "cheat" at CS 101 by stealing somebody else's homework, finding the program was wrong, and putting a lot of effort into debugging and fixing it, never mind changing the symbol names and taking other measures to hide the origin of the program.

It might be my karma, but fixing the program I stole turned out to be excellent preparation for a career in software dev.

scelerat 2021-08-19 16:41:30 +0000 UTC [ - ]

And yet today, the descendants of ELIZA provide first-line customer support for thousands of companies.

qualudeheart 2021-08-19 16:16:45 +0000 UTC [ - ]

Where are you getting these numbers for GPT-4/5 resource requirements from? I’ve heard of diminishing returns happening though not to the degree you’re describing.