AI firms want trust, but not everyone’s willing to put in the work to get it
Legitimate concerns about the evolving capabilities of AI are growing in the popular culture of developed economies, but the industry is still inadequately addressing the basics of trust.
AI analysts are talking about 2023 hardware and software expectations, but missing is a call to action to build trust among the public, who are mostly the subjects of algorithmic operations.
This is particularly true in the United States, where the government and industry assume that people will get with the program or have the program forced on them.
Four-year-old Lensa, a portraiture algorithm owned by Prisma Labs, is the latest in a slow parade of AI code that has intrigued (for the most part) and alarmed technophiles. For a fee, consumers can upload photos of themselves, pick an artistic style and get a few AI-rendered profile images in that style.
Multiple concerns with Lensa have been voiced, some more worthy of exclamation points than others.
Some think Lensa is a Chinese company harvesting personal data. But photo- and video-editing software maker Prisma’s headquarters are in California.
Having been trained on the internet (the often-cited excuse for ridiculous AI results), Lensa depicts women in more sexualized clothing and poses.
A quick look at Twitter threads does indicate that a business shot of a woman comes out the other end in revealing outfits, often with very ample breasts, even when nothing below the shoulders was submitted. It happens whether fantasy and anime (generally sexualized styles) are chosen or more straitlaced designs are requested.
And some Asians feel their portraits are drawn in a way that makes them look East Asian, if not White. Evidence for this can be seen, too.
But let us return to the first concern. Essentially, people are worried about how their biometric data will be used and managed. Adding China to the equation when it comes to talk of biometric surveillance is always a smart move, even if it is not always accurate. (They may be thinking of Different Dimension Me.)
Prisma’s fine print is at best clumsy when dealing with data protection.
Processing of submitted photos occurs through Stability AI’s Stable Diffusion algorithm, and the company says it holds to the purpose limitation and data minimization principles.
The company says face biometrics are not used to identify or profile subscribers and are not used for authentication, advertising, or marketing. They can, however, be used for making Lensa AI code better, a commercial use that some privacy advocates say should not happen, at least without express consent and, perhaps, compensation.
Prisma’s policy is to not “Transfer, share, sell, or otherwise provide the Face Data to advertising platforms, analytics providers, data brokers, information resellers or other such parties.”
That is a standard point in relevant software boilerplate.
The rights given to the company, according to the contract, do not include ownership, but to many they impose conditions just short of ownership. The capabilities will be used only to operate or improve Lensa, according to Prisma.
Assuming someone using an application marketed as silly fun were to read the fine print, it would be difficult to know what Prisma is saying about how it will handle that person’s face biometrics.
That is just not a trust-building step.
Read more at www.biometricupdate.com