• 0 Posts
  • 10 Comments
Joined 1 year ago
Cake day: July 9th, 2023



  • Because a lot of CEOs these days only care about quarterly reports. When interest rates went up, companies' cost of doing business also went up, so to keep the profit line going up, they had to cut costs somewhere. Labor makes up most of the expenses, so layoffs and forced RTO happened.

    These CEOs don’t care that they lose years of experience when employees leave. And by the time the lack of experience catches up and the companies are shitting themselves, the CEOs hope to have moved on to something else with their massive stock rewards for “increasing shareholder value”. Even the Boeing CEO, who wasn’t lucky enough to leave before shit hit the fan, is going to get a golden parachute. So really, no downside for them.


  • ofcourse@lemmy.ml to Europe · “Too many tourists?” · 3 months ago

    We also need to know what proportion of the city’s economy is driven by tourism. For a tourism-dominated city, it feels backward for the local population to complain about it. Unless it’s the retired folks in these cities who are complaining the loudest, after benefiting from the same tourism earlier.



  • It is so amazing to read an article I can relate to so well. As someone who constantly feels very similarly about the pointlessness of life, but without always having an active suicide plan, it also feels lonely. Not lonely as in having no one around me, but lonely as in there’s no one else who truly understands how I feel about life. Because when I mention it, my therapists get worried and want to talk about a safety plan. I’m glad I have a safety plan, but that’s not what I’m going through. I just don’t know what the purpose of my life is in this world sliding toward doom, and so I keep getting automatic thoughts that I’d be better off dead. Which is different from wanting to kill myself. And so I don’t talk about it most of the time.

    I’m grateful to the author for the bravery to write about this feeling so well and put this article out into the world. It made me feel a little less alone.


  • The criticism of this bill from large AI companies sounds a lot like the pushback from auto manufacturers against adding safety features like seatbelts, airbags, and crumple zones. Just because someone else used a model for nefarious purposes doesn’t absolve the model creator of their responsibility to minimize that potential. We already do this for a lot of other industries like cars, guns, and tobacco — require companies to minimize the potential for harm even when it’s individual actions, not the company directly, that cause the harm.

    I have been following Andrew Ng for a long time and I admire his technical expertise. But his political philosophy around ML and AI has always centered on self-regulation, which we have seen fail in countless industries.

    The bill specifically says that creators of open source models that have been altered and fine-tuned will not be held liable for damages caused by the altered models. It also only applies to models that cost more than $100M to train. If you have that much money for training models, it’s very reasonable to expect you to spend some portion of it ensuring the models don’t cause very large damages to society.

    So companies hosting their own models, like OpenAI and Anthropic, should definitely be responsible for adding safety guardrails against the use of their models for nefarious purposes — at least those causing loss of life. The bill says it would only apply to very large damages (e.g., exceeding $500M), so one person finding a loophole isn’t going to trigger it. But if the companies fail to close these loopholes despite millions of people (or a few people, millions of times) exploiting them, then that’s definitely on the company.

    As a developer of AI models and applications, I support the bill, and I’m glad to see lawmakers willing to get ahead of technology instead of waiting for something bad to happen and then trying to catch up, as happened with social media.




  • The GitHub Copilot example seems to indicate it’s a pricing problem. In fact, this situation might mean users are finding it so useful that they’re using it more than MS expected when it set the monthly subscription price. Over time, models are going to be optimized and costs will come down.

    Expecting AI to take over all human-intensive tasks is not realistic, but eventually it’s going to become part of a lot of repetitive tasks. Though I hope we see more open source base models instead of the current situation, with 3–4 major companies providing the base models behind most AI applications.