Discussion regarding AI usage in forum posts

OT: I want to vehemently contradict your statement. There is only one thing worse than having no information, and that is having wrong or incomplete information!

17 Likes

Yes, IF. It’s quite correct though. Now get back on topic, you heard the moderator!

Fairphone will hopefully roll out fixes by the end of 2025, or at least before Google stops patching security holes.

“quite correct” != “correct”, and parts of the text are just misleading.

Also, this is a forum where users share their experience and knowledge, and this pointless, endless AI drivel is truly annoying here.

5 Likes

Come on, man, stop whining and start contributing something on-topic yourself :laughing: If you want to discuss why or how updates are not coming through, start a topic elsewhere.

I already did: see Android 15 is Now Rolling Out to Fairphone 4! 📲 (FP4.QREL.15.14.3.20250923) - #250 by Lars_Hennig

@yvmuell Can we have the discussion regarding AI drivel as a separate, general discussion thread? I think it is important how the community wants to handle AI slop in the forum.

4 Likes

done.

for reference, the post that caused the discussion.

And a few words: I also use AI to get an idea of where to look further. But that’s the point for me: I don’t take anything AI says for granted; I take it and check further if I’m interested in the topic. I have also posted something from AI here 2-3 times as information. I think it’s good practice to indicate when I post something from AI that I haven’t verified further, so that everyone is aware it’s AI “knowledge” and those interested in the topic can check further on their own.

5 Likes

For me, posting unreflective, unchecked info from AI is useless. Appropriately marking such posts is a good idea IMO.

As a pointer for further discussion it is good, but only if the poster has a certain knowledge of their own to be able to check the information.
Another point for me is that AI slop is usually a lot of babble without providing additional information, and I find it very hard to read in the forum.

EDIT: I just remembered the golden rule of the radio operators when I was in the army:
THINK - PUSH - TALK
Some things never seem to change, even if the underlying technology does :wink:

14 Likes

I think everyone can go to https://chatgpt.com/ and enter their question themselves. Just reposting that doesn’t add any value.
It’s also tiring to read in the forum.
Of course, I use LLMs as a starting point for many things, but just reposting stuff without checking and editing it is more destructive than constructive, and it spreads false knowledge.
But if you clearly mark a strategy as coming from AI and you can’t test it yourself, that’s still OK, I guess.

7 Likes

And also no, not at all. Of the roughly 8 things you suggested, one was correct.

5 Likes

I would disagree and say: I don’t need to verify before posting to give others some keywords to check further, as I did here.

I also think I don’t need special knowledge upfront; if anything, I need the capability to verify by reading more.

In general: why should I take for granted anything someone I don’t know says here? I never know what this “knowledge” is based on. It can still be AI even if the person doesn’t copy the AI answer here. So what do you do then? You use common sense and ask for a reference, or check on your own what info you can find. That’s what you can still do when you see obvious AI posts.

1 Like

I won’t touch AI. It is totally unreliable and things are only right by chance.

Therefore I would be happy if we could ban this technology entirely within this forum.

4 Likes

That’s completely unrealistic, or neglecting reality, in my eyes. AI is not only ChatGPT, it’s everywhere, and even if we forbade copying and pasting from ChatGPT, you never know what people base their opinions on. Just because you don’t see it obviously doesn’t mean it’s not AI-based. It’s up to each individual to use common sense and make informed decisions.
I prefer to see the obvious, to better know what to do with it.

2 Likes

I have very strong opinions against AI on a technical / ethical basis (which I’m not going to go into here), so I block everyone who I see posting AI output.

I’m here to help people, so when someone wastes my time with that stuff I’m out. I need factual information from people with problems and I need factual information from people who are trying to help, everything else is just noise.
Human errors are part of the troubleshooting process, that is fine, but I’m not starting to also debug the output of fancy autocomplete.

I also don’t consider LLMs a valid research tool, especially in these edge cases that sometimes show up here, where the actual implementation of certain functions can vary wildly between manufacturers and devices.
I don’t want AI search, I have hyperfocus and AuDHD for that.

11 Likes

You wrote:

The below is according to AI (so I’m sceptical how correct this is) required. Overall AI states …

I like that, maybe this could become a standard, then we know what we are working with.

Precisely. Why should I bother to read something that someone else didn’t bother to write?

8 Likes

Not at all: there are several errors in what you posted, which could send naive or not technically minded people on a wild goose chase. For instance, it’s not just Orange Belgium, it’s all of Orange (the French multinational telecommunications company owning multiple international operators), and yes, they can and do distribute bloatware. I’ve seen it with my own eyes, on my own FP4 (bought online from Fairphone), on which Orange FR automatically installed a dozen unneeded and unwanted apps at first launch. Also, Orange users would be well advised to avoid this update as it is now (other users too, BTW, unless of course you have a passion for Russian roulette and need the adrenaline kick more than you need a working phone). :roll_eyes:

That being said, I’m still hoping Fairphone will eventually manage to fix their A15 update. Since it actually works on many phones but not all, it is clearly not a case of “sorry, it can’t be done”; it’s just a case of a bad interaction with something some people have and others don’t, meaning we can (?) hope they will eventually find the problem.

1 Like

The criticism that unreflective, AI-generated content produces large volumes of text that are not immediately recognizable as low quality is understandable. This kind of content floods discussion spaces and significantly degrades their quality.

At the same time, the circumstances require the use of translation tools. From there, the step to large language models is small, both technically and practically. What matters, however, is not the tool itself but how it is used. Texts are deliberately improved rather than automatically generated. When a formulation works, it is intentionally left unchanged; the involvement of an LLM does not need to be concealed.

This does not imply uncritical adoption. In almost every case, the outputs contain serious errors and factual inaccuracies that must be corrected.

The real difference lies in the working process. Texts are developed sentence by sentence or paragraph by paragraph and then reviewed and revised. This approach is fundamentally different from quickly generating several A4 pages via a prompt and publishing them without filtering.

Against this background, a more differentiated assessment seems appropriate.

The central advantage of this approach lies in linguistic quality: it produces clear, well-formed English sentences with formulations that would hardly be attainable without assistance. The result is far more precise and comprehensible than one’s own error-prone English and enables meaningful participation in discussions.

This text was written in my native language using AbiWord, then improved with the help of an LLM, and subsequently translated into English with an LLM and manually revised.

I had to read the text twice to understand it, and it still contains some errors and cryptic passages. And it is so unbelievably long compared to the content.

For instance: Translations are nowadays almost always done with LLMs (DeepL).

And frankly, I guess next to no one will use this approach.

6 Likes

I knew when I started reading that this text used an LLM, because it’s overly verbose and tries to fluff out every sentence with additional unnecessary stuff; it’s hard to read. As someone who often writes overly long sentences, with too much information crammed in everywhere (the ADHD urge to just add a bit more), I’m frankly appalled it’s trying to take my job :smirking_face:

Humans make mistakes, and English isn’t my native language either, but I’d rather have a rambly stream of consciousness I can barely understand than whatever this is.
We aren’t writing novels or scientific papers here, you aren’t paid by character count, and nobody writes like that in casual conversation; it’s barely comprehensible.

DeepL is a good example of why LLM translations are a problem. Their old ML-based model (which thankfully still exists) was one of the best translation tools out there, but their new default model suffers from the same issues other LLMs have: it sometimes adds things that weren’t in the original text, and the accuracy has gone down.

The problem with LLMs, especially the chatbot variety, is that they add randomness, which makes the output non-deterministic. The result is that they feel more human-like and can produce results that diverge from the training data, but it also means they will sometimes output stuff that is basically random noise (sources, links, people, etc. that don’t exist, for example).

If you ask the same question multiple times you aren’t guaranteed the same answer, which is not a great basis for a translation tool, and also not a great source of information.
LLMs are good at writing text that looks like coherent text, and that might be helpful in some cases, but stringing words together in a way that’s plausible doesn’t mean the output is factual.
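To make the randomness point concrete, here is a toy sketch of temperature sampling, the standard mechanism chatbots use to pick the next token. This is not any particular model’s implementation, and the function name and example numbers are made up for illustration; it only shows why the same input can yield different outputs, and why a low temperature makes the choice nearly deterministic.

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Pick one index from a list of raw model scores (logits).

    Higher temperature flattens the distribution (more randomness);
    a temperature near 0 almost always picks the highest-scoring option.
    """
    # Scale logits by temperature, then apply a numerically stable softmax.
    scaled = [x / temperature for x in logits]
    peak = max(scaled)
    exps = [math.exp(s - peak) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Draw a random number and walk the cumulative distribution.
    r = rng.random()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point rounding

# Toy scores for three candidate next tokens (made-up numbers).
logits = [1.0, 5.0, 2.0]

# Near-zero temperature: effectively always picks index 1 (the top score).
greedy = sample_with_temperature(logits, 0.01, random.Random(0))

# High temperature: the weaker candidates get a real chance, so repeated
# calls with different random states can return different indices.
creative = sample_with_temperature(logits, 2.0, random.Random())
```

The key point for the forum discussion: unless the random source is fixed (as with the seeded `random.Random(0)` above), two identical prompts can legitimately produce two different answers, which is exactly why repeated questions to a chatbot are not guaranteed to agree.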

7 Likes

Quoting justsendtheprompt.com:

Are you about to copy and paste the output of an LLM into an email, comment, ticket, or anything that another human is expected to read?

Don’t!

Just send the prompt

There’s no point to what you’re doing.

  • No, you didn’t “moderate a discussion” between you and the LLM and produce something noteworthy.

  • No, your “careful review” was not valuable.

  • No. It’s not different when you do it.

  • Yes, you are just producing slop.

Just send me the prompt.

4 Likes