Getting Down to Business

From Medium:

The creation of content is a noteworthy use of AI in book writing. Advanced algorithms for natural language processing enable AI systems to generate text that is both logical and appropriate for the given context. AI is being investigated by publishers and authors more frequently to help with book drafting, editing, and even section generation.

Thanks to large datasets, artificial intelligence algorithms are able to identify patterns in writing styles, themes, and structures. This facilitates the production of content that conforms to particular genres or emulates the traits of well-known writers. AI-generated literature may raise questions of authenticity among purists, but others see it as a creative tool that complements human creativity.

There are more and more instances of AI and human authors working together. AI is a useful tool for writers as it can help with character development, story twist suggestions, and idea generation. The creative process is improved by this cooperative approach, which makes use of the advantages of both machine efficiency and human inventiveness.

. . . .

But using AI to write books also brings up philosophical and ethical issues. Can a machine really understand the subtleties of culture, the depth of storytelling, or the complexities of human emotions? Even though AI systems are capable of producing text and copying styles, true creativity and emotional connection are frequently derived from the human experience.

Notwithstanding the progress made, there is still continuous discussion about AI’s place in book writing. Preserving the genuine voice of human authors and the breadth of human experiences is a delicate balance that demands careful consideration, even though it surely offers efficiency and creative possibilities.

In summary, the connection between artificial intelligence and book writing is quickly changing. Automation improves productivity, offers opportunities for collaboration, and provides data-driven insights, but it also raises questions about what makes human creativity truly unique. As technology develops further, the future of literature will be shaped by striking the correct balance between the benefits of artificial intelligence (AI) and the inherent qualities of human storytelling.

Link to the rest at Medium

PG noted the portion of the last paragraph of the OP that talked about “the inherent qualities of human storytelling.”

While that portion of the OP certainly caused PG to feel warm and fuzzy for a few moments, retired lawyer PG butted in with a question about what “the inherent qualities of human storytelling” actually are.

Certainly, “the inherent qualities of human storytelling” are not manifested equally across the breadth of humanity. Some people are better storytellers than others. Some people are great at telling stories in print, and others are great at telling stories on the stage or with a movie or television camera pointed at them. On relatively rare occasions, some people are good at storytelling in multiple media.

For a motion picture, script-writing storytellers are involved, acting storytellers are involved, directing storytellers are involved, etc. We’ve already seen successful motion pictures like The Matrix and 2001: A Space Odyssey in which non-human storytellers play key roles, and in every Disney cartoon movie, where there are no human actors on screen, animated characters play all the roles.

As far as human emotions are concerned, are there a lot of people who didn’t shed a tear when Bambi’s mother was killed by a hunter?

PG notes that AI foundational research has been going on for a long time. (More on that in a separate post to appear on TPV in the foreseeable future.)

However, the widespread use of AI systems is a relatively recent phenomenon, one that requires software and hardware sufficient to respond to a flood of prompts from a great many people at the same time. Hosting an AI program available to all comers today requires computing power on the scale of very large cloud computing services like Amazon Web Services, Microsoft Azure, and Google Cloud Platform.

However, the history of modern computer development has been a nearly steady stream of smaller, cheaper, and more powerful devices. A couple of online reports claim your Apple Watch has more than twice the computing power of a Cray-2 supercomputer from 1985.

There is no guarantee that your next cell phone will equal the computing power of a group of giant cloud computing systems within the next couple of years, but Moore’s Law says it’s only a matter of time.

Moore’s Law is the observation that the number of transistors on an integrated circuit will double every two years with minimal rise in cost. Intel co-founder Gordon Moore predicted a doubling of transistors every year for the next 10 years in his original paper published in 1965. Ten years later, in 1975, Moore revised this to doubling every two years. This extrapolation based on an emerging trend has been a guiding principle for the semiconductor industry for close to 60 years.

Intel Newsroom
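
The doubling described in the Intel passage above is simple exponential arithmetic: a chip’s transistor count grows by a factor of two for every two years that pass. Below is a minimal sketch in Python, using the roughly 2,300 transistors of the 1971 Intel 4004 purely as an illustrative starting point; actual chip counts diverge considerably from this idealized curve.

    # Minimal sketch of the idealized Moore's Law arithmetic: transistor
    # counts doubling every two years. The ~2,300-transistor Intel 4004
    # (1971) is used only as an illustrative starting point.
    def projected_transistors(start_count: int, start_year: int, year: int,
                              doubling_period_years: float = 2.0) -> float:
        """Project a transistor count forward, assuming periodic doubling."""
        elapsed_years = year - start_year
        return start_count * 2 ** (elapsed_years / doubling_period_years)

    if __name__ == "__main__":
        for y in (1971, 1985, 2000, 2020):
            print(y, f"{projected_transistors(2_300, 1971, y):,.0f}")

Run as-is, the projection reaches the tens of billions of transistors by 2020, which is roughly the ballpark of the largest chips shipping around that time, though only because the industry kept finding ways to stay near the curve.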

PG suggests that opinions about the ability of AI systems to generate book-length stories that many people will pay for are likely to be revised in the future.

As always, feel free to share your thoughts in the comments.

2 thoughts on “Getting Down to Business”

  1. While the transistors thing is true, you’re not going to use your watch for weather predictions.

    Moore’s Law held true for 60 years, and even Moore said that when he made the prediction, he did not expect it to remain true for as long as it did.

    But he missed one thing: that fab cost would also double, and at this point the wave has mostly passed. Two years is now four to six years, maybe even ten; I am not up to date on the precise numbers.

  2. Cray vs. iPhone 13:

    https://m.youtube.com/watch?v=MRPYi9pLaeQ&pp=ygUOY3JheSB2cyBpcGhvbmU%3D

    Mind you, I’m pretty sure the kind of software Crays were used with doesn’t run on smart phones or smart watches. That’s what PCs are for. 😉

    For that matter, there are way more practical ($$$$) uses for AI than stringing narratives together, and that will remain so indefinitely. There is no reason to fret about its outputs. Really.
    Just because something is theoretically possible doesn’t mean it is economically viable.
