My gawds, some people need to learn what’s a homage and also stop being upset on behalf of others. This comic is fine, stop bellyaching. This is what terminal permission culture does to a motherfucker.
The only person who should care about anything other than the quality is Randall. However since he licensed it CC BY-NC 2.5 how he feels about it doesn’t really matter either.
We can probably infer by the licensing that he’s cool with it.
I think people should be concerned about things on others’ behalfs. We all need to stick together.
This situation is a send-up though. Totally not a concern.
Oh definitely! I just meant in this particular case.
What is terminal permission if I may ask?
Permission culture is a term primarily criticizing copyright law. Something that I would expect db0 to agree with! 🏴☠️
if someone is actually using ai to grade papers I’m gonna LITERALLY drink water
I’m gonna literally drink water if they DON’T
A new ripoff of an old classic
Is it a ripoff if they credit the original?
Are you implying that the credit is here? If so, where? I am not seeing it.
Lower right corner.
I honestly didn’t notice that - it was a bit small and pixelated, good catch
In a version that doesn’t even fully make sense. With databases there is a well-defined way to sanitize your inputs so arbitrary commands can’t be run like in the xkcd comic. But with AI it’s not even clear how to avoid all of these kinds of problems, so the chiding at the end doesn’t really make sense. If anything the person should be saying “I hope you learned not to use AI for this”.
Always satanise your inputs.
Reminds me of: https://www.wired.com/story/null-license-plate-landed-one-hacker-ticket-hell/
A guy thought it would be funny to change his license plate to NULL.
More like “And I hope you learned not to trust the wellbeing and education of the children entrusted to you to a program that’s not capable of doing either.”
Well that would require too much work invested into stealing of https://xkcd.com/327/
It could be credibly called an homage if it had a new punchline, but methinks the creator didn’t know what “sanitize” meant in this context.
One of the best things ever about LLMs is how you can give them absolute bullshit textual garbage and they can parse it with a huge level of accuracy.
Some random chunks of html tables, output a csv and convert those values from imperial to metric.
Fragments of a python script and ask it to finish the function and create a readme to explain the purpose of the function. And while it’s at it recreate the missing functions.
Copy paste of a multilingual website with tons of formatting and spelling errors. Ask it to fix it. Boom done.
Of course, the problem here is that developers can no longer clean their inputs as well and are encouraged to send that crappy input straight along to the LLM for processing.
There’s definitely going to be a whole new wave of injection style attacks where people figure out how to reverse engineer AI company magic.
LLM system input is unsanitizable, according to NVidia:
The control-data plane confusion inherent in current LLMs means that prompt injection attacks are common, cannot be effectively mitigated, and enable malicious users to take control of the LLM and force it to produce arbitrary malicious outputs with a very high likelihood of success.
https://developer.nvidia.com/blog/securing-llm-systems-against-prompt-injection/
I am extremely horrified by the prospect of GenAI grading.
You are roughly a decade late. Computers have been grading essays for a long time. The mcat for example hasn’t had human grading in about that long.
that’s not generative ai
plus humans choose the correct answers
Two muffins are baking in an oven. One muffin turns to the other and says “sure is hot in here isn’t it?”
To which the other muffin replies “Holy crap! A talking muffin!”Changing the muffins to cookies would not make it a different joke.
The funny thing about a comic is, you are able to express the idea without writing multiple paragraphs of words.
As a daily reader of SMBC, I can confidently tell you this rule is a suggestion at best.
How do you sanitize ai prompts? With more prompts?
It’s really easy, just throw an error if you detect a program will cause a halt. I don’t know why these engineers refuse to just patch it.