Size of LLMs won’t matter as much moving forward


From TechCrunch:

When OpenAI co-founder and CEO Sam Altman speaks these days, it makes sense to listen. His latest venture has been on everyone’s lips since the release of GPT-4 and ChatGPT, one of the most sophisticated large language model-based interfaces created to date. But Altman takes a deliberate and humble approach, and doesn’t necessarily believe that when it comes to large language models (LLMs), bigger is always going to be better.

Altman, who was interviewed over Zoom at the Imagination in Action event at MIT yesterday, believes we are approaching the limits of LLM size for size’s sake. “I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said.

He sees size as a false measurement of model quality and compares it to the chip speed races we used to see. “I think there’s been way too much focus on parameter count, maybe parameter count will trend up for sure. But this reminds me a lot of the gigahertz race in chips in the 1990s and 2000s, where everybody was trying to point to a big number,” Altman said.

As he points out, today we have much more powerful chips running our iPhones, yet we have no idea for the most part how fast they are, only that they do the job well. “I think it’s important that what we keep the focus on is rapidly increasing capability. And if there’s some reason that parameter count should decrease over time, or we should have multiple models working together, each of which are smaller, we would do that. What we want to deliver to the world is the most capable and useful and safe models. We are not here to jerk ourselves off about parameter count,” he said.

Altman has been such a successful technologist partly because he makes big bets, and then moves deliberately and thinks deeply about his companies and the products they produce — and OpenAI is no different.

“We’ve been working on it for so long, but it’s with gradually increasing confidence that it’s really going to work. We’ve been [building] the company for seven years. These things take a long, long time. I would say by and large in terms of why it worked when others haven’t: It’s just because we’ve been on the grind sweating every detail for a long time. And most people aren’t willing to do that,” he said.

When asked about the letter that requested that OpenAI pause for six months, he defended his company’s approach, while agreeing with some parts of the letter.

“There’s parts of the thrust [of the letter] that I really agree with. We spent more than six months after we finished training GPT-4 before we released it. So taking the time to really study the safety model, to get external audits, external red teamers to really try to understand what’s going on and mitigate as much as you can, that’s important,” he said.

But he believes there are substantial ways in which the letter missed the mark.

“I also agreed that as capabilities get more and more serious that the safety bar has got to increase. But unfortunately, I think the letter is missing most technical nuance about where we need to pause — an earlier version of the letter claimed we were training GPT-5. We are not and we won’t be for some time, so in that sense, it was sort of silly — but we are doing other things on top of GPT-4 that I think have all sorts of safety issues that are important to address and were totally left out of the letter. So I think moving with caution, and an increasing rigor for safety issues is really important. I don’t think the [suggestions in the] letter is the ultimate way to address it,” he said.

Altman says he’s being open about the safety issues and the limitations of the current model because he believes it’s the right thing to do. He acknowledges that sometimes he and other company representatives say “dumb stuff,” which turns out to be wrong, but he’s willing to take that risk because it’s important to have a dialogue about this technology.

Link to the rest at TechCrunch

PG expects that he will not be the only one to have stopped for a millisecond at the LLM in the title, at least in this context.

The title refers to the newer sense of LLM, meaning Large Language Model.

In the United States, at least, an LLM is also a degree that some schools of law formerly awarded (and some may still award) their graduates, typically after three years of law school following a four-year Bachelor of Arts or Bachelor of Science degree in just about anything.

In the US, the alternative is a JD, or Juris Doctor, although some US law schools award an LLB – Bachelor of Laws – at the end of three years of study.

PG did a bit of research and found that, in some US law schools, an LLM is also a one-year degree available to non-US attorneys, which allows them to take the bar exam in the US. (PG thinks taking and passing a bar exam is still mandatory prior to practicing law in a given jurisdiction.)

To further complicate the various rites of passage, in the states of California, Vermont, Virginia, and Washington, an applicant who has not attended law school may take the bar exam after reading law under a judge or practicing attorney for an extended period of time. The required time varies from jurisdiction to jurisdiction.

Reading the law was once the most common method of becoming a lawyer in the United States, Great Britain and Canada. This was the universal method of becoming a lawyer before law schools were created in those nations. Abraham Lincoln famously became a lawyer by reading the law to prepare himself to enter into politics.

PG understands that in other English-speaking nations, an LLM is something different:

The LL.M. (Master of Laws) is an internationally recognized postgraduate law degree. An LL.M. is usually obtained by completing a one-year full-time program. Law students and professionals frequently pursue the LL.M. to gain expertise in a specialized field of law, for example in the area of tax law or international law. Many law firms prefer job candidates with an LL.M. degree because it indicates that a lawyer has acquired advanced, specialized legal training, and is qualified to work in a multinational legal environment.

In most countries, lawyers are not required to hold an LL.M. degree, and many do not choose to obtain one. An LL.M. degree by itself generally does not qualify graduates to practice law. In most cases, LL.M. students must first obtain a professional degree in law, e.g. the Bachelor of Laws (LL.B.) in the United Kingdom or the Juris Doctor (J.D.) in the United States, and pass a bar exam or the equivalent exam in other countries, such as the Zweites Staatsexamen in Germany. While the general curriculum of the LL.B. and J.D. is designed to give students the basic skills and knowledge to become lawyers, law students wishing to specialize in a particular area can continue their studies with an LL.M. program. Some universities also consider students for their LL.M. program who hold degrees in other related areas, or have expertise in a specific area of law.

Basic information about the Legum Magister, or LL.M., degree
