How my Firefox became a LibreWolf
New blog-post: https://www.davidrevoy.com/article1065/how-my-firefox-became-a-librewolf
@protonprivacy also flounced off Mastodon after they received well deserved criticism.
Is @mozillaofficial doing the same?
@simonzerafa @protonprivacy @mozillaofficial I'm not sure how official the account you linked here is; the branding feels off, and it's not noted in the bio that this is their official new account.
Before December, Mozilla had their own instance, and decided to shut it down (src: https://techcrunch.com/2024/09/17/mozilla-exits-the-fediverse-and-will-shutter-its-mastodon-server-in-december/ )
Proton also really manages their PR like rotten potatoes imo ( https://theintercept.com/2025/01/28/proton-mail-andy-yen-trump-republicans/ )
... both companies also abuse marketing BS wording and hype AI. Not surprising they get criticized here.
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial I mean, something as simple as text-to-speech, OCR, or machine translation models (like Google Translate/DeepL) can technically be considered "AI". I'm a student, and many of my courses involve creating our own models from scratch, and many of my projects involve using models like tesseract-ocr, turtle-tts, YOLO, LLaMA,... for some tasks, and this is painful for me, as many outputs don't even make sense and I had to tweak a lot.
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial Though I've noticed that despite the claims of AI bros that their models are better, or that they can solve something, they rarely say exactly which model they use (or they pass off plain algorithms or simpler models as "AI").
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial So I think if Mozilla just said they are using machine translation (for their recent translation feature), or that the alt-text feature the devs are developing uses an image-describing model (they literally fine-tuned a CLIP model), instead of saying "AI" the way current AI bros do (oh god, even #Framework was paid by #AMD to talk like this), and were more transparent about their investments, they would get less backlash.
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial Also note that compared to other kinds of models (text-to-speech, OCR, image description/classification, speech-to-text, and so on), LLMs (like the GPT models used in ChatGPT) and image generation models (like MidJourney) are much, much larger, and thus have a large environmental impact just from running them (people may know this already, but I'm a CS student and know the technical reasons).
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial For example, the base HuBERT model (which can be used for speech recognition, speech-to-text, emotion recognition,...) is only around 360-370 MB, while even a small LLM similar to the models powering ChatGPT, LLaMA 3 8B, is around 16 GB.
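(For anyone curious where those numbers come from: a model's download size is roughly its parameter count times the bytes used per parameter. This is a back-of-the-envelope sketch; the ~95M parameter count for HuBERT base and the fp32/fp16 storage widths are my assumptions, not figures from the thread.)

```python
# Back-of-the-envelope: weight file size ≈ parameter count × bytes per parameter.
def model_size_gb(params: float, bytes_per_param: int) -> float:
    """Rough size of a model's weights in gigabytes (10^9 bytes)."""
    return params * bytes_per_param / 1e9

# HuBERT base: ~95M parameters, typically stored as 32-bit floats (4 bytes each).
hubert = model_size_gb(95e6, 4)   # ≈ 0.38 GB, matching the ~360-370 MB figure
# LLaMA 3 8B: ~8B parameters, typically distributed as 16-bit floats (2 bytes each).
llama = model_size_gb(8e9, 2)     # ≈ 16 GB
print(f"HuBERT base: {hubert:.2f} GB, LLaMA 3 8B: {llama:.1f} GB")
```

So the ~40x size gap follows directly from the parameter counts, even before counting the extra memory needed at inference time.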
@davidrevoy @simonzerafa @protonprivacy @mozillaofficial Even worse: unlike companies and projects building other types of models, which usually try to improve efficiency, generative AI projects (LLMs and image generation, usually stable diffusion models) keep making their models more complex and much, much larger, and large companies are pouring trillions of dollars into that, which helped create the AI bubble and worsens the environmental issues.