• 0 Posts
  • 77 Comments
Joined 3 months ago
Cake day: March 28th, 2024

  • It’s already in your words: proportional. A proportional controller, or P controller (generally, a PID controller with K_P kept and K_I and K_D set to 0).

    Alright, a small edit as I try to explain my answer. Say a value x is greater than 0 and less than 100 (so 0 < x < 100). In this case, the point you want to reach is 100, and x keeps growing toward it (x → 100). If you subtract x from 100, you get the remainder, call it y, that you still need to add to get to 100 (y = 100 − x). Now, the rate of change is made proportional to that number, i.e., as x gets closer to 100, y gets smaller, so x approaches more slowly. How fast exactly is determined by a proportionality constant, which can be called K_P.
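    A minimal sketch of that in Python (the K_P, dt, and step-count values here are my own, just for illustration):

    ```python
    # P control toward a setpoint of 100: the correction applied each step
    # is proportional to the remaining error y = setpoint - x.
    setpoint = 100.0
    x = 0.0
    K_P = 0.5   # proportionality constant (illustrative)
    dt = 0.1    # time step (illustrative)

    for _ in range(200):
        y = setpoint - x      # remaining error
        x += K_P * y * dt     # rate of change proportional to the error

    print(round(x, 3))        # close to 100, approaching ever more slowly
    ```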



  • Well, this is just my 2 cents. I think you misunderstand the point I am making. First of all, accept that translation is a lossy process. A translation will always lose meaning one way or another, and short of a full essay about the piece, you will never get the full picture of the art in translation. Think of it this way: does a haiku in Japanese make sense in English? Maybe, but most likely not. So anyone who wants to experience the full art must either read an essay about said art or learn the original language. But for a story, a translation can at least give you the gist of what happens. A story will inherently have events that have to be conveyed, so a loss of information from subtlety can be tolerated, since the highlight is another piece (the string of events).

    Secondly, how the model works. GPT is a very bad representative for a translation model. A Generative Pre-trained Transformer will, well, generate something. I’d argue translation is not a generative task, but rather a distance-calculation task. I think you should read more on how current machine-learning models work. I suggest the 3Blue1Brown channel on YouTube, as he has a good video on the topic, and very recently Welch Labs also made a video comparing it to AlexNet, (arguably) the first breakthrough on computer-vision tasks.
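    To illustrate what I mean by a distance-calculation task, here is a hedged sketch (the embedding vectors are made-up placeholders, not output from any real model): embed the source sentence and each candidate translation, then pick the candidate whose embedding is closest.

    ```python
    import numpy as np

    def cosine_distance(a, b):
        # 0 for parallel vectors (same meaning), up to 2 for opposite ones
        return 1.0 - float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    source = np.array([0.2, 0.7, 0.1])                # e.g. a Japanese sentence
    candidates = {
        "candidate A": np.array([0.25, 0.65, 0.12]),  # close in meaning
        "candidate B": np.array([0.90, 0.10, 0.40]),  # far in meaning
    }

    # the "translation" is the candidate at the smallest distance
    best = min(candidates, key=lambda k: cosine_distance(source, candidates[k]))
    print(best)  # candidate A
    ```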







  • I cannot find the reference to the port being flimsy. I did, however, find the part where the top hot-swap components (the touchpad and the plate area) are having problems. The only side port they mention is the charging port. But then again, as I said, the firmware must be redone to account for a removable dGPU. Now you may be wondering how big an effect adding a removable dGPU has. Off the top of my head, the motherboard’s power-supply circuitry must be remade to account for the additional power draw when needed. That alone means the power-control firmware has to be redone, and that can have a wide range of effects on other components too, because power firmware is really far-reaching and may break assumptions in other firmware. Not to mention part of the cooling system is also removable now. Framework has gone out of their way to invent a new standard for a removable dGPU on a laptop.

    Btw, here is the part of the article that mentions the port:

    Twice, the touchpad suddenly stopped scrolling and stopped accepting button presses until I physically removed it from the system and reseated it. I’ve repeatedly gotten a Windows message about how my “USB device might have limited functionality when connected to this port” even if I’m just plugging in the charger.



  • Uhh, does the model 13 have a modular panel? IIRC, it doesn’t. Also, manufacturing a modular panel and a modular port are very different, and the knowledge transfer can’t be that big. The port, for example, has looser tolerances, since it isn’t really visible most of the time; being snug but not flush is good enough. I imagine the panel doesn’t have that luxury. Stability issues, that I can agree with. But then again, I’ll give them the benefit of the doubt, since they must handle additional assumptions that can’t be made on other laptops, namely the modular GPU. Writing firmware with that new assumption could be a PITA.



  • Whether it’s easy or not varies wildly, but the usual tasks are:

    • partition the drive
    • format the drive
    • mount the drive
    • install the base system

    That is the bare minimum, but we need to do more configuration to be able to boot. Hence the next task is configuring the following:

    • fstab
    • timezone, hostname, and networking
    • boot loader (I just use the EFI directly nowadays)

    That is it. Everything else is usually workload-specific: if you want Arch to be a server, you usually don’t install a GUI, while for a workstation or gaming you need more steps that vary depending on hardware. The Arch Wiki covers a good deal of hardware, from laptops to desktops, and their quirks. See the command sketch below for the base steps.
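    The steps above map to commands roughly like this. A sketch only, assuming a UEFI machine booted from the Arch ISO; the device names (/dev/sda1, /dev/sda2) and the systemd-boot choice are illustrative, not the only way:

    ```sh
    # partition, format, mount
    cfdisk /dev/sda                      # create an EFI and a root partition
    mkfs.fat -F 32 /dev/sda1             # EFI system partition
    mkfs.ext4 /dev/sda2                  # root partition
    mount /dev/sda2 /mnt
    mkdir -p /mnt/boot && mount /dev/sda1 /mnt/boot

    # install the base system
    pacstrap /mnt base linux linux-firmware

    # configure: fstab, timezone, hostname, boot loader
    genfstab -U /mnt >> /mnt/etc/fstab
    arch-chroot /mnt
    ln -sf /usr/share/zoneinfo/Region/City /etc/localtime
    echo myhostname > /etc/hostname
    bootctl install                      # systemd-boot; needs a loader entry in /boot/loader/entries/
    ```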





  • Uhhh, no. I think it is better to implement something akin to federation than to break up a company just because. If anyone wants to sue Valve, then they can enforce interoperability at the very least, but not divide up their business. We didn’t force Apple to split their software and hardware, did we? We forced Apple to offer a choice of interoperability. From then on, it is all fair, since anyone can link their data from Valve and any other store that opts to implement the interoperability protocol.



  • Why can’t anyone else develop said features? Should the competition worsen itself just because no one is able to develop the same features? As far as I remember, Valve doesn’t patent anything ridiculous like regional pricing or family sharing, so anyone is welcome to develop those themselves. They even made Proton open source, but apparently Epic doesn’t like the idea of being on the Linux market.


  • So let me get this straight. Any client that wants to have Steam features, like the forums, hosting, workshop, chat, and all that jazz, should be able to do so without paying Steam any fee? Why don’t they develop those themselves? Or should Steam sell them as a service to those who want them? Say, for example, Epic wants to have family sharing. Should Steam sell their family-sharing feature to Epic as a service?