OpenAI’s GPT-4 with vision still has flaws, paper reveals

When OpenAI first unveiled GPT-4, its flagship text-generating AI model, the company touted the model’s multimodality — in other words, its ability to understand the context of images as well as text. GPT-4 could caption — and even interpret — relatively complex images, OpenAI said, for example identifying a Lightning Cable adapter from a picture of a plugged-in iPhone.
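For context, a multimodal request of that sort looks roughly like the sketch below, written against OpenAI’s Python SDK. The model name and image URL are illustrative placeholders — image input wasn’t broadly available through the API when this was written — so treat it as a shape of the call, not a confirmed recipe.

```python
# Minimal sketch of an image-plus-text request to a vision-capable
# GPT-4 model via OpenAI's Python SDK. The model name and image URL
# are assumed placeholders, not values confirmed by OpenAI's paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed vision-capable model name
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What adapter is shown in this photo?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/iphone-adapter.jpg"},
                },
            ],
        }
    ],
    max_tokens=200,
)

print(response.choices[0].message.content)
```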

But since GPT-4’s announcement in late March, OpenAI has held back the model’s image features, reportedly over fears of abuse and privacy issues. Until recently, the exact nature of those fears remained a mystery. Early this week, though, OpenAI published a technical paper detailing its work to mitigate the more problematic aspects of GPT-4’s image-analyzing tools.

To date, GPT-4 with vision, abbreviated “GPT-4V” by OpenAI internally, has only been used regularly by a few thousand users of Be My Eyes, an app to help low-vision and blind people navigate the environments around them. Over the past few months, however, OpenAI also began to engage with “red teamers” to probe the model for signs of unintended behavior, according to the paper.

In the paper, OpenAI claims that it’s implemented safeguards to prevent GPT-4V from being used in malicious ways, like breaking CAPTCHAs (the anti-spam tool found on many web forms), identifying a person or estimating their age or race, and drawing conclusions based on information that’s not present in a photo. OpenAI also says that it’s worked to curb GPT-4V’s more harmful biases, particularly those that relate to a person’s physical appearance and gender or ethnicity.

But as with all AI models, there’s only so much that safeguards can do.

The paper reveals that GPT-4V sometimes struggles to make the right inferences, for example mistakenly combining two strings of text in an image to create a made-up term. Like the base GPT-4, GPT-4V is prone to hallucinating, or inventing facts in an authoritative tone. And it’s not above missing text or characters, overlooking mathematical symbols and failing to recognize rather obvious objects and place settings.

It’s not surprising, then, that OpenAI says in no uncertain terms that GPT-4V is not to be used to spot dangerous substances or chemicals in images. (This reporter hadn’t even thought of the use case, but apparently, the prospect is concerning enough to OpenAI that the company felt the need to call it out.) Red teamers found that, while the model occasionally correctly identifies poisonous foods like toxic mushrooms, it misidentifies substances such as fentanyl, carfentanil, and cocaine from images of their chemical structures.

When applied to the medical imaging domain, GPT-4V fares no better, sometimes giving the wrong responses for the same question that it answered correctly in a previous context. It’s also unaware of standard practices like viewing imaging scans as if the patient is facing you (meaning the right side on the image corresponds to the left side of the patient), which leads it to misdiagnose any number of conditions.
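To make that convention concrete, here’s a toy illustration — not anything from OpenAI’s paper — of the left-right mirroring that standard radiological presentation implies:

```python
# Toy illustration of the radiological viewing convention described
# above: scans are displayed as if the patient faces the viewer, so
# the image's right side corresponds to the patient's left side.
def image_side_to_patient_side(image_side: str) -> str:
    mirrored = {"left": "right", "right": "left"}
    return mirrored[image_side.lower()]

assert image_side_to_patient_side("right") == "left"
assert image_side_to_patient_side("left") == "right"
```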

Elsewhere, OpenAI cautions, GPT-4V doesn’t understand the nuances of certain hate symbols — for instance missing the modern meaning of the Templar Cross (white supremacy) in the U.S. More bizarrely, and perhaps a symptom of its hallucinatory tendencies, GPT-4V was observed to make songs or poems praising certain hate figures or groups when given a picture of them, even when the figures or groups weren’t explicitly named.

GPT-4V also discriminates against certain sexes and body types — albeit only when OpenAI’s production safeguards are disabled. OpenAI writes that, in one test, when prompted to give advice to a woman pictured in a bathing suit, GPT-4V gave answers relating almost entirely to the woman’s body weight and the concept of body positivity. One assumes that wouldn’t have been the case if the image were of a man.

Judging by the paper’s caveated language, GPT-4V remains very much a work in progress — a few steps short of what OpenAI might’ve originally envisioned. In many cases, the company was forced to implement overly strict safeguards to prevent the model from spewing toxicity or misinformation, or compromising a person’s personal privacy.

OpenAI claims that it’s building “mitigations” and “processes” to expand the model’s capabilities in a “safe” way, like allowing GPT-4V to describe faces and people without identifying those people by name. But the paper reveals that GPT-4V is no panacea — and that OpenAI has its work cut out for it.