
Editors of a prestigious science journal say ChatGPT “lacked depth and insight”

The huge potential of OpenAI’s ChatGPT software for producing natural-sounding responses to human questions has been the subject of much recent discussion.

The system’s flaws, such as its tendency to produce outright falsehoods, have also been discussed at length. According to Nature, such programs cannot take responsibility for the content and integrity of scientific studies, and so they do not meet the standards for authorship of academic papers. Some academics have found another, subtler drawback to ChatGPT: even when the software is correct, its output can have a bland, overly broad, uninformative tone.

Stem Cell Reports, a journal from the scientific publisher Cell Press, found this out when two of its editors, Barbara Treutlein and Patrick Cahan, used ChatGPT to help them write an editorial. The editorial was published on the journal’s website on January 10, 2023.

The editors asked ChatGPT a variety of foundational questions concerning computational systems biology. A key goal was to illustrate “how breakthroughs in computing… may aid practitioners throughout the larger stem cell research enterprise: by reducing time, thereby enabling us to undertake more research!”

The editors published the questions they put to ChatGPT and the program’s replies, with some edits. One such exchange follows:

Prompt: What are the major challenges in using computational techniques to address questions in stem cell biology?

ChatGPT: The need for large amounts of data: To accurately model stem cell behavior, researchers need large amounts of data about the behavior of individual cells and the interactions between different cell types. This can be challenging to collect and process, especially when dealing with large numbers of cells.
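
The editors presumably worked through ChatGPT’s web interface, but an exchange like this can also be reproduced programmatically. Below is a minimal sketch, assuming the official openai Python package (v1.x) and an OPENAI_API_KEY environment variable; the model name gpt-3.5-turbo is an illustrative assumption, not necessarily the model behind the editors’ session.

```python
# Minimal sketch of sending the prompt above through OpenAI's chat API.
# Assumptions: the openai Python package (v1.x) is installed and the
# OPENAI_API_KEY environment variable is set; the model name is
# illustrative, not necessarily what powered the editors' session.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "What are the major challenges in using computational techniques "
    "to address questions in stem cell biology?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name
    messages=[{"role": "user", "content": prompt}],
)

# The reply arrives as free text; in the editorial it took the form of a
# bulleted list of challenges, such as the need for large amounts of data.
print(response.choices[0].message.content)
```

Because the model’s output is not deterministic, rerunning the same prompt will generally produce a differently worded list.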

In their editorial, Treutlein and Cahan write that ChatGPT was “useful in articulating how computational systems biology would contribute in stem cell research and in identifying certain difficulties,” but they add that “generally, many of the comments lacked depth and understanding.”

As a follow-up, the editors were asked to point to specific responses that illustrated this lack of depth.

Cahan responded through email with an annotated collection of ChatGPT queries and replies that, he argued, indicated “a striking lack of depth and intelligence” on the part of the software.

Despite its limitations, Cahan said, “Overall, I am really pleased with the capabilities of this instrument.” He says he has put ChatGPT through its paces by having it develop code to solve some of the problems from his computational stem cell biology course.

“It works (not perfectly), and it records the code,” Cahan said. “I believe it has a great capacity for summarising and synthesising data, and I can’t wait to see what comes of its adaptation for biological literature.”

Because of their shortcomings, Cahan and Treutlein omitted the following questions and responses from their editorial. Cahan annotated each with his thoughts on why he considers ChatGPT’s answers inadequate.

Prompt: What are the major challenges in using computational techniques to answer questions in the field of stem cell biology?

Response: There are several major challenges in using computational techniques to answer questions in the field of stem cell biology. Some of the key challenges include:

* The complexity of stem cell behavior: Stem cells are highly versatile cells that can differentiate into a wide range of cell types. Understanding the mechanisms underlying their behavior and differentiation is extremely complex and can be difficult to study using computational methods.

Patrick Cahan: This is actually precisely an area where computational models should shine because they can find patterns in complex data.

[ChatGPT response continues]

* The need for large amounts of data: To accurately model stem cell behavior, researchers need large amounts of data about the behavior of individual cells and the interactions between different cell types. This can be challenging to collect and process, especially when dealing with large numbers of cells.

Patrick Cahan: Again, the large data size is an area where computational systems biology should outperform standard techniques. ChatGPT seems to be conflating data collection with data processing here.

[ChatGPT response continues]

* The limitations of current computational models: Despite advances in computational biology and systems biology, current models are still limited in their ability to accurately capture the complexity of stem cell behavior. This can make it difficult to generate reliable predictions and insights from computational studies.

Patrick Cahan: This is an example of lack of depth. If we had prompted for specific examples, it might have been possible to elicit ChatGPT 1) to give examples of specific biological phenomena that elude current computational models and 2) to speculate why this is the case.

[ChatGPT response continues]

* The difficulty of translating computational results into practical applications: Even if computational models accurately capture the behavior of stem cells, it can be difficult to translate these results into practical applications, such as the development of new therapies or treatments.

Patrick Cahan: This is true, but it is not a challenge that is unique to computational applications. So, like the second point, ChatGPT seems to be slightly misunderstanding the question. Perhaps with further clarification or prompt engineering on our part, we could have elicited a more satisfying response.

[ChatGPT response continues]

Overall, these challenges highlight the need for continued research and development in the field of computational biology and systems biology to improve our ability to study and understand stem cell behavior.

Patrick Cahan: We see lots of this type of generic summarization that adds no new information to the overall ChatGPT output.