Romeo and Juliet via ChatGPT


Here’s a question PG put to ChatGPT, one of the leading text AI programs:

Did Romeo and Juliet have to die? Couldn’t they have married and enjoyed a happy life?

Here’s the answer to PG’s question from the AI:

Romeo and Juliet is a fictional play written by William Shakespeare, first performed in the 16th century. The tragic ending of the play is an essential part of the plot and serves to highlight the consequences of the feud between the two families.

Although it is possible to imagine a different ending for the story, where Romeo and Juliet marry and live happily ever after, it would fundamentally alter the play’s themes and message. The tragic ending is what makes the play so powerful and enduring.

In the context of the play, Romeo and Juliet’s death is the result of a tragic set of circumstances, including their families’ long-standing feud, the lack of communication between them, and the tragic misunderstanding about Juliet’s supposed death. These factors combined to create a tragic outcome that could not have been avoided.

Ultimately, the play’s message is not about whether Romeo and Juliet could have lived happily ever after, but about the destructive power of hatred and violence, and the need for love and compassion to overcome these forces.

7 thoughts on “Romeo and Juliet via ChatGPT”

  1. That throat-clearing first sentence is characteristic of ChatGPT, and completely unnecessary. Who, upon receiving that question, thinks that the questioner needs this information? I wonder if this verbal tic will be removed. As it is, it just screams “ChatGPT!”

    Also, an actually good answer would include that Romeo & Juliet has in fact been adapted multiple times to have a happy ending. Here is a recent example: https://stpetecatalyst.com/romeo-juliet-get-a-happy-ending-at-american-stage/ As it is, we get a high school English paper.

      • It depends on what you are looking for. The answer is “valid” in a sense, but surely omitting immediately relevant information doesn’t make for a good answer. To put it another way, if this were in fact a high school English assignment, which would be better: the answer Bing gave, or one that includes those happy ending productions?

        And, as I note in other comments, move from questions with soft, squishy answers to questions with right and wrong answers and the entire project falls apart.

    • Sure, but how about finding a concise answer with Google?

      Try this:

      https://www.windowscentral.com/software-apps/bing-chat-is-what-finally-made-me-ditch-google-search-and-not-look-back

      Anybody with Google fu can find that info after a few tries but that’s not a universal skill. Bing coughed it up right away.

      I tried Bing a couple of days ago. I asked it “where do blonde jokes come from?”
      No need to iterate or weed through dozens of SEO-promoted links.
      I got the answer and its source link in seconds. (There was a whole list of other links in case I wanted to use them, but the first did the job.)

      For something serious I could and would dig further to validate the reply, but for a casual query it’s fine. The mechanism is the key for now; confidence for more serious queries will build up over time.

      For now it is adequate and, as Adam Osborne used to say about another emerging technology, “Adequacy is sufficient.”

      • I am very unimpressed by Bing chat. Give it a question on a subject you know well, and where the answer is either right, wrong, or insufficient: not some fuzzy “give me a recipe for [X]” type of request. My standard test question is “How does the infield fly rule work?” Here is the answer it just gave me:

        “The infield fly rule is a rule of baseball and softball that treats certain fly balls as though caught, before the ball is caught, even if the infielder fails to catch it or drops it on purpose. The umpire’s declaration of an infield fly means that the batter is out (and all force plays are removed) regardless of whether the ball is caught. The infield fly rule is a judgment call, as the rule states that “The judgment of the umpire must govern”. The rule directs the umpire to declare an infield fly immediately on determining that the play meets the criteria described above, solely based on the umpire’s discretion. Runners may advance “after the ball is touched”. This rule governs the tag up if the infield fly is caught. The runner does not need to wait on base until the fielder achieves full control of the ball. There is no need to tag up at any time if the ball is dropped.12”

        That answer is not good. The second half, about when a player on base can run, is confusing at best, and does not include the possibility of the ball dropping untouched. But the glaring problem is that we are not told what “certain fly balls” activate the rule. It has not in fact adequately answered the question.
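        For what it’s worth, the trigger conditions Bing left out can be written down as a short predicate. This is an illustrative sketch, not part of the comment; the function name and parameters are my own, assuming the standard MLB definition of an infield fly (fewer than two outs; runners on first and second, or bases loaded; a fair fly ball an infielder can catch with ordinary effort; line drives and bunts excluded):

```python
def is_infield_fly(outs, runner_on_first, runner_on_second,
                   catchable_with_ordinary_effort,
                   is_line_drive, is_bunt):
    """Sketch of when the infield fly rule can be invoked.

    Requires fewer than two outs and runners on first and second
    (a runner on third doesn't matter either way), on a fair fly
    ball an infielder can catch with ordinary effort. Line drives
    and bunted fly balls are explicitly excluded.
    """
    return (outs < 2
            and runner_on_first and runner_on_second
            and catchable_with_ordinary_effort
            and not is_line_drive
            and not is_bunt)

# One out, runners on first and second, routine pop-up: rule applies.
print(is_infield_fly(1, True, True, True, False, False))   # True
# Two outs: rule never applies.
print(is_infield_fly(2, True, True, True, False, False))   # False
# Runner on first only (no force at third): rule does not apply.
print(is_infield_fly(0, True, False, True, False, False))  # False
```

        Whether the ball is “catchable with ordinary effort” remains the umpire’s judgment call, which is why it is an input here rather than something the code decides.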

        Here is another one: “How does the dropped third strike rule work?”

        “The dropped third strike rule in baseball is when a hitter strikes out, but the catcher fails to catch the pitch in the air. When the ball hits the ground on a third strike, the hitter is allowed to run to first base. If the hitter safely makes it to first base, no out is awarded to the defense. This rule applies to all levels of baseball, from Little League to the Major Leagues12”

        This is incomplete in two ways. No out is awarded if the batter safely reaches first base, but does the batter get to stay there, or does he go back to the dugout and no out given? We are not told. The more serious omission is that there is no hint that there are circumstances where the dropped third strike rule does not apply: if there is a runner on first base and fewer than two outs. This answer is simply wrong, in that if you follow it you will get the wrong outcome.
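        The omitted condition is simple enough to state as a one-line predicate. Again an illustrative sketch with names of my own choosing, assuming the standard rule: the batter may attempt first base on an uncaught third strike only when first base is open, or when there are two outs.

```python
def batter_may_run_on_dropped_third_strike(outs, runner_on_first):
    """Sketch of when a batter may run on an uncaught third strike:
    first base must be open, unless there are already two outs."""
    return (not runner_on_first) or outs == 2

# Runner on first, one out: batter is simply out.
print(batter_may_run_on_dropped_third_strike(1, True))   # False
# First base open: batter may run.
print(batter_may_run_on_dropped_third_strike(0, False))  # True
# Two outs: batter may run even with first base occupied.
print(batter_may_run_on_dropped_third_strike(2, True))   # True
```

        The runner-on-first exception exists to prevent the catcher from dropping the ball deliberately to start an easy double play, which is exactly the nuance the Bing answer never hints at.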

        In both cases there are those tiny reference links, which take you to better answers. But at that point what we have is a really slow search engine with no previews for the links. This is an improvement?

        The difference is that Google mostly gives links, not answers. That the user must use their own judgment is obvious. Bing gives answers, and unreliable ones at that.

        • Here’s another one: “What was Bobby Mathews’s ERA in 1884?” This is a straightforward factual question with a correct answer: 3.32. I asked Bing Chat this question a while back, and it was completely stumped. I tried it again just now. It came back with this:

          “I’m sorry, I couldn’t find the exact ERA of Bobby Mathews in 1884. However, I found that he played for the Boston Red Stockings in 1884 and led the league in strikeouts. He started 47 games, for a 25-18 won-lost record, to Fisher’s 11 starts1”

          Now it throws out random facts that weren’t what was asked. The interesting thing is that they are all wrong. Mathews was with the Philadelphia Athletics in 1884, where he was eighth in the league in strikeouts. That 47 games for a 25-18 record was his line in 1872 with Baltimore. That mysterious reference to “Fisher” presumably is Cherokee Fisher, who was out of baseball by 1884 but was Mathews’s teammate in 1872.

          I see three problems here. The simple one is that the AI can’t read a table. The information is readily available, but not if a table is an inscrutable mystery. The second is the impulse to wander off in a different direction. The third, and largest, is the bullshit problem: “hallucinations,” if you demand the more polite, if less accurate, word. This is bad enough if marketed as “chat.” It is disastrous when marketed as “search.”

          A better answer, given that it can’t read tables, would have been “I can’t tell you what Bobby Mathews’s ERA was in 1884, but you should be able to find it here: https://www.baseball-reference.com/players/m/mathebo01.shtml.” Type that question into Google and this is the second link, after the SABR bio, and it comes up quickly.

Comments are closed.