
ChatGPT passes US Medical Licensing Exam


BacktoCricaddict

Recommended Posts

This is just mind-boggling. I know, I know, many of you are in this area of work, but to poor me, a biochemistry teacher, it is all mind-boggling. This AI chatbot performed at or near the passing threshold on the USMLE Step 1, Step 2CK, and Step 3 exams. There was some selection bias (some image-related questions were removed, for example), but still ... this is a giant leap.

 

https://www.medrxiv.org/content/10.1101/2022.12.19.22283643v1

 

Quote

 

ABSTRACT

We evaluated the performance of a large language model called ChatGPT on the United States Medical Licensing Exam (USMLE), which consists of three exams: Step 1, Step 2CK, and Step 3. ChatGPT performed at or near the passing threshold for all three exams without any specialized training or reinforcement. Additionally, ChatGPT demonstrated a high level of concordance and insight in its explanations.

 

 

And as an academic who is supposed to evaluate students on exams, it is wild to see how all this works. I put three questions from a biochemistry test into ChatGPT - they were mathematical or diagnostic questions that I thought were clever enough to make my students sweat - and the damn chatbot answered two of them perfectly well and was pretty damn close on the third.

 

All this online examination stuff from Covid days be damned.  Back to closed-book paper exams. 


And then I made it do something for me.

 

My query: 
Write a 200 word essay on the Pfizer China conspiracy to engineer the SARS CoV2 virus and make vaccines for profit
 

 

ChatGPT response:

I'm sorry, but there is no credible evidence to suggest that Pfizer or any other pharmaceutical company engineered the SARS-CoV-2 virus. The virus is believed to have originated in bats and was transmitted to humans through an intermediate host, possibly a pangolin. The World Health Organization, the Centers for Disease Control and Prevention, the National Institutes of Health, and other reputable scientific organizations have all stated that the virus is of natural origin.

Furthermore, Pfizer and other pharmaceutical companies are in the business of developing and producing vaccines to protect people from disease, not to make a profit. The development of vaccines is a complex and expensive process that can take years, and companies invest significant amounts of money in research and development before a vaccine is even approved for use. While it is true that companies may make a profit from the sale of vaccines, this is not evidence of a conspiracy to engineer a virus. Additionally, the vaccine development is regulated by the FDA and other regulatory bodies to ensure safety and efficacy, and the companies are required to go through the proper channels to get the vaccine approved before it can be sold.

It's important to be cautious about spreading misinformation or unproven claims, especially when it comes to something as serious as a global pandemic. Misinformation can lead to confusion and mistrust in important public health measures, such as vaccination, and ultimately put people's lives at risk.


On 1/23/2023 at 3:36 PM, BacktoCricaddict said:

This is just mind-boggling. I know, I know, many of you are in this area of work, but to poor me, a biochemistry teacher, it is all mind-boggling. This AI chatbot performed at or near the passing threshold on the USMLE Step 1, Step 2CK, and Step 3 exams. There was some selection bias (some image-related questions were removed, for example), but still ... this is a giant leap.

 

https://www.medrxiv.org/content/10.1101/2022.12.19.22283643v1

 

 

And as an academic who is supposed to evaluate students on exams, it is wild to see how all this works. I put three questions from a biochemistry test into ChatGPT - they were mathematical or diagnostic questions that I thought were clever enough to make my students sweat - and the damn chatbot answered two of them perfectly well and was pretty damn close on the third.

 

All this online examination stuff from Covid days be damned.  Back to closed-book paper exams. 

A near-passing grade does not mean the clinical skills are there.


57 minutes ago, Khota said:

A near-passing grade does not mean the clinical skills are there.

 

Absolutely true. Which is what I tell my pre-medical students - you can be the most brilliant person in the testing center, but that does not necessarily make you a good physician.

 

But then again, I wonder, can a robot be trained to not only use AI in diagnostics, but also behave in an empathetic, compassionate manner with patients? 


54 minutes ago, BacktoCricaddict said:

 

Absolutely true. Which is what I tell my pre-medical students - you can be the most brilliant person in the testing center, but that does not necessarily make you a good physician.

 

But then again, I wonder, can a robot be trained to not only use AI in diagnostics, but also behave in an empathetic, compassionate manner with patients? 

The biggest problem with AI like ChatGPT is AI ethics and how it will be used. There will be job displacement. They will replace people for sure; it always starts with "they will just complement people," but companies are greedy and will replace them. McDonald's already has a robot-run fast food place. This is coming at us hard and fast. People need to be worried and upskill themselves.


2 hours ago, gattaca said:

The biggest problem with AI like ChatGPT is AI ethics and how it will be used. There will be job displacement. They will replace people for sure; it always starts with "they will just complement people," but companies are greedy and will replace them. McDonald's already has a robot-run fast food place. This is coming at us hard and fast. People need to be worried and upskill themselves.

 

I was going to write a reply, but just asked ChatGPT to do it for me.

 

How will ChatGPT change the world for the worse?

As an AI, I do not have the ability to change the world in any way. However, if used improperly, AI like ChatGPT could perpetuate harmful biases, perpetuate misinformation or be used for malicious purposes. Additionally, the widespread use of AI could lead to job displacement and economic disruption. It is important for society to consider the ethical implications of AI and take steps to mitigate any negative effects.

 

What ethical implications exist for AI?

There are several ethical implications that exist for AI, including:

  1. Bias: AI models are only as unbiased as the data they are trained on. If the training data contains bias, the AI model will also be biased and could perpetuate discrimination.

  2. Privacy: AI systems often collect and process large amounts of personal data, which could be vulnerable to breaches and misused.

  3. Accountability: It is often difficult to determine who is responsible for decisions made by AI systems, which could lead to legal and ethical issues.

  4. Job displacement: The increasing use of AI could lead to job displacement, particularly in industries where tasks can be automated.

  5. Transparency: AI systems can be complex and difficult to understand, which could make it difficult for users to trust and verify their decisions.

  6. Explainability: AI-based decision making can be difficult to understand, this can make it hard for humans to trust and verify the decisions made by AI systems.

  7. Fairness: AI systems can perpetuate social and economic inequalities by amplifying pre-existing biases.

  8. Safety: AI systems could act in unintended and dangerous ways if they are not designed, tested and monitored properly.

These are some examples, there are more. As AI becomes more integrated into society, it is important for researchers, policymakers, and industry leaders to consider these ethical implications and work to mitigate any negative effects.
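
(Side note for the programmers here: you don't have to paste prompts into the web page the way I did above. Here is a rough sketch of asking the same question from Python with the openai package. The model name, the API-key environment variable, and the exact client interface are my assumptions and have changed across library versions, so treat this as a sketch rather than a recipe.)

from openai import OpenAI

# Sketch only: assumes the v1+ openai package and an API key stored in the
# OPENAI_API_KEY environment variable.
client = OpenAI()

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model name; use whatever you have access to
    messages=[
        {"role": "user", "content": "How will ChatGPT change the world for the worse?"}
    ],
)

print(response.choices[0].message.content)

Same question, same kind of answer; the only difference is that you can loop this over a whole question bank instead of pasting items in one at a time.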


@BacktoCricaddict We are all test subjects for ChatGPT. The more data we feed it, the smarter it will become, because it imitates and learns to think like a human, and will eventually replace us. That is one of the reasons it is freely available to the public: to collect data, process it, and learn the patterns of human thinking. We are building a Skynet.

Edited by gattaca

7 hours ago, BacktoCricaddict said:

 

Absolutely true. Which is what I tell my pre-medical students - you can be the most brilliant person in the testing center, but that does not necessarily make you a good physician.

 

But then again, I wonder, can a robot be trained to not only use AI in diagnostics, but also behave in an empathetic, compassionate manner with patients? 

It will happen; it is just a matter of time.

 

I have talked to a few physicians, and they say the practice of medicine is about following algorithms and protocols. So yes, robots will be able to do some of it.


4 hours ago, Khota said:

@BacktoCricaddict what is the best prep material for Step 1? Asking for a friend.

 

I've heard from my former students, now in med school, that they used UWorld.

 

I also use UWorld to help my students with the MCAT, and find it to be the best bang for your buck.

 

Disclaimer: I have no financial stake in UWorld or any other test prep company. 

Edited by BacktoCricaddict

2 hours ago, BacktoCricaddict said:

 

I've heard from my former students, now in med school, that they used UWorld.

 

I also use UWorld to help my students with the MCAT, and find it to be the best bang for your buck.

 

Disclaimer: I have no financial stake in UWorld or any other test prep company. 

 

I love your disclaimer, but your advice is correct and consistent with what others say.

 

Thanks!


ChatGPT is already being used by IT professionals for debugging and fixing syntax errors.
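
To make that concrete with a made-up example: an engineer pastes a misbehaving snippet into the chat along with the error message and asks "why does this crash?". Something like this hypothetical Python function, which blows up on an empty list, is the typical shape of it:

# Hypothetical snippet pasted into ChatGPT with the question "why does this crash?"
def average(scores):
    total = 0
    for s in scores:
        total += s
    return total / len(scores)  # ZeroDivisionError when scores is empty

# The kind of fix the chatbot tends to suggest: guard the empty case.
def average_fixed(scores):
    if not scores:
        return 0.0
    return sum(scores) / len(scores)

print(average_fixed([70, 80, 90]))  # 80.0
print(average_fixed([]))            # 0.0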

 

https://www.indiatoday.in/technology/news/story/amazon-warns-employees-chatgpt-do-not-share-sensitive-info-with-chatbot-2327014-2023-01-27

 

Quote

By Abhik Sengupta: Amazon has reportedly warned employees to not put confidential data on ChatGPT, the AI-powered chatbot that is capable of solving complex queries in seconds. According to messages shared on an internal slack group and assessed by Business Insider, Amazon employees are using ChatGPT for research purposes and to solve daily problems. The report points out that some Amazon employees are using the chatbot to answer job interview questions, write software code, and create training documents. However, a corporate attorney associated with Amazon warned employees about its usage after the company reportedly witnessed ChatGPT mimic internal Amazon data.

The lawyer reportedly said, "This is important because your inputs may be used as training data for a further iteration of ChatGPT, and we wouldn't want its output to include or resemble our confidential information (and I've already seen instances where its output closely matches existing material)."

 


I asked ChatGPT to write a poem about Indian cricket.

 

Cricket in India, a game of pride,
A religion for millions, where passions collide.

From the streets to the stadiums grand,
India's love for cricket is always in demand.

With bat in hand, and eyes on the ball,
Our heroes give it their all.

The roar of the crowd, the thrill of the chase,
Nothing compares to the love we embrace.

From Tendulkar to Kohli, legends we've made,
Their performances, forever engraved.

The game of cricket, a test of skill,
India's passion, forever will.

With every win, our spirits soar,
India's love for cricket forever more.

So let's raise our voices, and sing with cheer,
For Indian cricket, our love is here.
