
AI archaeology fakes.....

Started by Imperial Dave, February 15, 2025, 06:10:20 PM


Erpingham

It's an interesting question - we do seem to be exploring AI issues a lot of late  :)

She makes the point that AI images, even if not completely accurate, can help visualise things like scale or the relationship of archaeological features in a landscape.  I can also see how one might take a properly drawn reconstruction of a Hadrian's Wall fort, for example, and get the AI to bring it to life with figures, vehicles etc.

A purely decorative image "Draw me Dave Hollin as a late Roman Imperial usurper from Britain" might be fun for illustrating something in Slingshot  :)

Where it gets trickier is actually creating a reasonable reconstruction of something like a Romanised Hun.  The illustrations I've seen from AIs can be pretty good compositionally, and even show some characterisation, but how do we know it is a reasonable take on the evidence?  Does the "commissioner" (for want of a better word) of the image specify the sources?  Do we expect the AI to list its sources, like a good Osprey illustration?  I'm sure it's all being considered by Richard as he regulates Slingshot content, along with AI-written articles.

Imperial Dave

Former Slingshot editor

Jim Webster

Just out of interest, and spurred on by the various discussions of AI, I asked an AI to "write a story in the style of Tallis Steelyard."

It produced this   https://tallissteelyard.wordpress.com/2025/02/15/ah/

The general feeling from people commenting is that there are similarities, but it lacks something. (You can also see genuine stuff written by Tallis on the blog.)
I felt that Tallis had nothing to fear from the competition.

But then I asked it to write in the style of Jim Webster, mentioning some of the rural and agricultural blogs I've written.
This it did, and it was scary. I could put it on the blog as it is and I'm not sure how many would notice. With a couple of minutes' work nobody would notice.
I shared it with somebody who knows my work and she came back with

"Wow. This stuff is scary. It makes my flesh creep. Human-not-human. Really worried about what it's going to do to jobs, quality of writing, art, etc."

RichT

Quote from: Erpingham on February 15, 2025, 07:37:50 PM
I'm sure it's all being considered by Richard as he regulates Slingshot content, along with AI-written articles.

I wouldn't be so sure!  :)

My limited experience of AI image generation for ancient history so far is that it is pretty disappointing - it's easy to get it to produce highly realistic, striking images, but extremely hard to get them historically accurate. I tried training an image AI to depict hoplites, but with generally poor results. AI images seem best for generic stuff, rather than technical specifics. I dare say someone with more skill or patience than me could produce better results, though.

As for AI-generated content in Slingshot: detecting inaccuracy is something I can't really do with human contributors if it's outside my area of expertise, so I don't think the situation is any different. I'm not averse to content produced by any means so long as it is not wilfully false, and in that I'm dependent on the contributor, however the words or images are actually created.

Again, if anyone (human) wanted to explore this for Slingshot I think it would be interesting.

Imperial Dave

Ermmmmmm....get AI to write an article on AI writing articles...?
Former Slingshot editor

Erpingham

Quote from: RichT on February 15, 2025, 11:15:55 PM
I'm not averse to content produced by any means so long as it is not wilfully false, and in that I'm dependent on the contributor, however the words or images are actually created.

This seems a solid approach. We already expect contributors to avoid plagiarism and copyright issues, and we would, I think, expect people to take some care about presenting accurate information, however it is generated. It seems to be the norm to identify AI-generated images as such in publications, so presumably that would apply here too?  There doesn't seem to be a similar convention with text, mind.

I think one of my concerns would be AIs embedding false information, which most tests suggest they do.  Popular history has enough problems with myths of uncertain origin as it is.

Keraunos

All this talk of assessing the ability of AI to assess the quality of AI-written rules brings to mind the imaginary menagerie managers imagining managing imaginary menageries!

And I realise, poor paleface that I am, that I have conflated two discussion threads.

RichT

Quote from: Erpingham on February 16, 2025, 10:13:00 AM
I think one of my concerns would be AIs embedding false information, which most tests suggest they do.  Popular history has enough problems with myths of uncertain origin as it is.

Intent is important - if there is an intent to deceive (as when AI is used by students to produce course work for example), I'm against it.

As for false information - well, some human contributors to Slingshot have included plenty of false information, or at least false inferences, over the years  :o . But yes, using AI unquestioningly, like copying and pasting from Wikipedia, is not the way to go.

I'm interested in the creative (maybe I should scare-quote 'creative') aspects of AI - like generating rules - more than just having it spout input facts (of dubious merit). As I understand it, AIs will increasingly be trained on material produced by other AIs, so errors and falsehoods become embedded, repeated and magnified. Just one more way in which the internet serves as the perfect tool for the dissemination of lies. :(