zlacker

[parent] [thread] 2 comments
1. westur+(OP)[view] [source] 2023-05-11 01:14:15
> Google’s approach is to label the images when they come out of the AI system, instead of trying to determine whether they’re real later on. Google said Shutterstock and Midjourney would support this new markup approach. Google developer documentation says the markup will be able to categorize images as trained algorithmic media, which was made by an AI model; a composite image that was partially made with an AI model; or algorithmic media, which was created by a computer but isn’t based on training data.

Can the markup spec store at least (1) the prompt and (2) the model that purportedly generated the image? Is it schema.org JSON-LD?

It's IPTC: https://developers.google.com/search/docs/appearance/structu...

If the IPTC-to-RDF mappings (i.e./e.g. schema:ImageObject; schema:CreativeWork > https://schema.org/ImageObject) are complete, it would be possible to sign IPTC metadata with W3C Verifiable Credentials (and e.g. W3C DIDs) just like any other [JSON-LD] RDF. But is there an IPTC schema extension for appending signatures, and/or an IPTC graph normalization step that produces output equivalent to a (web-standardized) JSON-LD normalization function?
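The sign-the-metadata idea above can be sketched minimally. Note the hedges: real Verifiable Credential proofs use RDF canonicalization (e.g. URDNA2015) and an asymmetric DID-keyed signature; here sorted-key JSON serialization stands in for canonicalization and HMAC-SHA256 stands in for the proof, just to show the "canonicalize, then sign bytes" shape. The property placement is illustrative, not a published IPTC/schema.org binding.

```python
import hashlib
import hmac
import json


def canonicalize(doc: dict) -> bytes:
    # Stand-in for a real JSON-LD/RDF canonicalization step (e.g. URDNA2015):
    # a deterministic serialization via sorted keys and fixed separators,
    # so semantically identical documents yield identical bytes.
    return json.dumps(doc, sort_keys=True, separators=(",", ":")).encode()


def sign(doc: dict, key: bytes) -> str:
    # HMAC-SHA256 stands in for a VC proof (which would use a DID-resolved
    # asymmetric key); the point is only: normalize first, then sign.
    return hmac.new(key, canonicalize(doc), hashlib.sha256).hexdigest()


meta = {
    "@context": "https://schema.org",
    "@type": "ImageObject",
    # "trainedAlgorithmicMedia" is an IPTC DigitalSourceType term; where it
    # attaches in schema.org terms is an open question (hence this thread).
    "digitalSourceType": "trainedAlgorithmicMedia",
}
proof = sign(meta, b"demo-key")
```

Because signing happens over the canonicalized bytes, key order in the source document does not change the signature, which is the property a normalization step is supposed to buy you.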

/? IPTC jsonschema: https://github.com/ihsn/nada/blob/master/api-documentation/s...

/? IPTC schema.org RDFS

IPTC extension schema: https://exiv2.org/tags-xmp-iptcExt.html

[ Examples of input parameters & hyperparameters: e.g. the screenshot in the README.md of stable-diffusion-webui or text-generation-webui: https://github.com/AUTOMATIC1111/stable-diffusion-webui ]

How should input parameters (e.g. LLM model version, signed checksum, and model hyperparameters) be stored alongside a generated CreativeWork? As filename.png.meta.jsonld.json or similar?
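One way the sidecar-file idea could look, as a sketch: write a JSON-LD document next to the image using the filename.png.meta.jsonld.json convention floated above. The `generationParameters` block and its field names are hypothetical (no such IPTC or schema.org vocabulary exists yet); only `@context`, `@type`, and `contentUrl` are standard schema.org terms.

```python
import json
from pathlib import Path


def write_sidecar(image_path: str, prompt: str, model: str,
                  model_sha256: str, params: dict) -> Path:
    """Write generation metadata next to an image as a JSON-LD sidecar.

    The "generationParameters" object is an illustrative extension,
    not a published IPTC/schema.org property.
    """
    meta = {
        "@context": "https://schema.org",
        "@type": "ImageObject",
        "contentUrl": Path(image_path).name,
        "generationParameters": {          # hypothetical extension field
            "prompt": prompt,
            "model": model,
            "modelChecksum": model_sha256,  # e.g. sha256 of the weights file
            **params,                       # sampler, steps, seed, etc.
        },
    }
    sidecar = Path(image_path + ".meta.jsonld.json")
    sidecar.write_text(json.dumps(meta, indent=2))
    return sidecar
```

A sidecar survives image re-encoding better than embedded XMP/IPTC blocks, at the cost of being trivially separable from the image it describes.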

replies(1): >>westur+n1
2. westur+n1[view] [source] 2023-05-11 01:25:23
>>westur+(OP)
If an LLM passes the Turing test ("The Imitation Game") - i.e. has output indistinguishable from a human's output - does that imply that it is not possible to stylometrically fingerprint its outputs without intentional watermarking?

https://en.wikipedia.org/wiki/Turing_test

replies(1): >>kadoba+092
3. kadoba+092[view] [source] [discussion] 2023-05-11 16:10:02
>>westur+n1
Implicit in the Turing test is the entity doing the evaluation. It's quite possible that a human evaluator could be tricked while a tool-assisted human, or an AI itself, could not be. And some humans may simply be better at not being tricked than others.
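What "tool-assisted" might mean in the stylometry sense: compare simple distributional features of two texts rather than judging fluency by eye. A minimal sketch using character trigram profiles and cosine similarity (real detectors use far richer features, and this is illustrative only, not a working AI-text detector):

```python
import math
from collections import Counter


def trigram_profile(text: str) -> Counter:
    # Character trigram counts: a crude stylometric fingerprint.
    return Counter(text[i:i + 3] for i in range(len(text) - 2))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors (1.0 = identical
    # distribution, 0.0 = no shared trigrams).
    keys = set(a) | set(b)
    dot = sum(a[k] * b[k] for k in keys)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0
```

An evaluator armed even with something this crude is measuring a different thing than the classic interrogator, which is the point of the comment above: "passes the Turing test" is relative to who, and what tooling, is doing the evaluating.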
[go to top]