Given the lack of edge cases, I feel the latter possibility is strong. I’m just glad it was easy!
Only Bayes Can Judge Me
FWIW, I read this somewhat charitably: I didn’t read this article as “I want there to be prediction markets in journalism” as much as “The right-wing fuckos are very into this shit, so expect it to froth up out of the sewers in 2025.” That being said, as discussed elsewhere, many of the finer points are questionable.
N.B.: I am not aware of Lorenz’s shit opinions.
Techbros: “I’m hungry for that Lab Grown Meat!”
Labs:
It’s probably worth $14B in the way a fire that you feed $14B of cash into is worth $14B
Day 13, day 13 of shirking other responsibilities.
Ok. So, I overthought this a little. Ultimately, this problem boils down to solving a system of 2 linear equations, aka inverting a 2×2 matrix.
Of course, anyone who has done undergraduate linear algebra knows to look to the determinant in case some shit goes down. For the general problem space, a zero determinant means that one equation is just a multiple of the other. A solution could still exist in this case. Consider:
a => x+1, y+1
b => x+2, y+2
x = 4, y = 4 (answer: 2 tokens)
The following has no solution:
a => x+2, y+2
b => x+4, y+4
x = 3, y = 3
I thought of all this, and instead of coding the solution, I just checked if any such cases were in my input. There weren’t, and I was home free.
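The solve described above can be sketched in Python (a minimal sketch, not the actual code; the machine format and the 3-token/1-token button costs are per the puzzle, and the zero-determinant case is punted on, exactly as described):

```python
def solve_machine(ax, ay, bx, by, px, py):
    """Return the minimum token cost (3 per A press, 1 per B press) to reach
    the prize at (px, py), or None if no non-negative integer solution exists.
    Cramer's rule on:  a*ax + b*bx = px ;  a*ay + b*by = py."""
    det = ax * by - ay * bx
    if det == 0:
        # degenerate case: the buttons are collinear; the input reportedly
        # never hits this, so bail out rather than handle it properly
        return None
    a, ra = divmod(px * by - py * bx, det)
    b, rb = divmod(ax * py - ay * px, det)
    if ra or rb or a < 0 or b < 0:
        return None  # the unique solution isn't a non-negative integer pair
    return 3 * a + 1 * b
```

Since there is exactly one solution whenever the determinant is non-zero, “minimum cost” falls out for free; the `divmod` remainders catch the machines where the unique solution isn’t integral.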
No real changes to the p1 solution aside from the new targets. I wasn’t sure if 64-bit ints were big enough to fit the numbers, so I changed my code to use big ints.
I’m looking at my code again and I’m pretty sure that was all unnecessary.
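A quick back-of-the-envelope check supports that (assuming two-digit button offsets and part 2’s 10^13 prize offset): the largest intermediate in the 2×2 solve is a cross product of a prize coordinate and a button offset, which stays well inside a signed 64-bit range.

```python
MAX_OFFSET = 100            # button deltas in the input appear to be two-digit
MAX_PRIZE = 10**13 + 10**4  # part 2 adds 10_000_000_000_000 to each target

# worst-case numerator in Cramer's rule: prize*offset - prize*offset
worst_numerator = 2 * MAX_PRIZE * MAX_OFFSET  # ~2e15
print(worst_numerator < 2**63 - 1)  # True: i64 is plenty
```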
Oh yeah, haha. I often face the dilemma dilemma, in which I have to choose between ignoring the “incorrect” usage (i.e. not a choice between two things that are difficult to choose between) and seething, OR mentioning the correct usage and looking like a pedant. Sometimes it’s a trilemma, and I’m all over the shop. But more seriously, I usually let it slide and let people use it to mean “a situation”.
I doubt that Lorenz has a dilemma in line with the correct usage. Still, I couldn’t fight the urge to steelman it (spoilered below), though I suspect it’s nothing near what Lorenz had in mind.
In the world that Lorenz posits, where prediction markets somehow represent accurate news reporting, either a journalist participates in the market whilst reporting news (conflict of interest), or they don’t, and they are bad at their job (and not performing at your job is unethical, I guess?)
sorry, what exactly is the dilemma here? how is it an ethical dilemma to have an unethical way to make money?
I’m guessing what’s being said is that in this fictional scenario with an ethically neutral prediction market, you could do insider trading but with fake news? Like, you predict that they will find cheese on the moon, and then you make a story about cheese on the moon.
Either way, it is a moot point since prediction markets are bunk, ethically or otherwise.
Nah my neighbour, Steve Tesla. He’s real smart. Found a way to get free cable
This is great. The “diaspora” framing makes me want there to be an NPR style public interest story about all this. The emotional core would be about trying to find a place to belong and being betrayed by the people you thought could be your friends, or something.
It’s my pleasure!
That’s the power of not saying anything interesting, you can’t contradict it
We’ve actually already done that. KFC can’t legally call itself Kentucky Fried Chicken anymore because they don’t serve “chicken”. Instead it’s a GMO non-chicken animal that fits all the criteria you mentioned. Open your third eye
tired: lab grown meat
wired: worm filet mignon
hired: eat the rich
Ah yeah the folding mechanism which just appeared one day out of nowhere, invented by nobody for no reason.
I’ve said what I came here to say. Meanwhile, you haven’t said anything at all.
AI is a garbage generating plagiarism machine. It’s not political
Yes. Think about what it is plagiarising. Datasets are biased; this is like statistics/ML 101.
outside of a single country where everything has to look political to prevent people from voting independent,
You can just say the country, and also, this doesn’t really make any sense. Am I to infer that, if things weren’t political, people would vote (a famously political action) for independents?
and the only regulation AI ever needs is one declaring all it produces a derivative work of all the material it used for learning.
I’m pretty sure there’s a lot more wrong with LLMs than just plagiarism.
Any attempts to ascribe further properties to that remixing machines are just natural intelligence equivalent of slop.
I’m not 100% sure what you mean here.
Anything worth talking about is political. What the hell are you on about?
Day 12:
Ok. I have been traumatised by computational geometry before, so I was initially spiralling. Luckily, there wasn’t too much comp geo stuff to recall.
My solution was a lot simpler than I initially thought: no need for union-find, accounting for regions inside regions, etc. Just check every square for a given region, and if it touches a square in the same region that you’ve seen before, subtract the common edge. This is linear in the area of the problem, so it’s fast enough.
It took a moment to figure out that I could modify the above perimeter counting to mark the squares containing the perimeter and walk along it afterwards, counting each edge. This is also linear in area.
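The part-1 counting can be sketched as follows (a minimal Python sketch, assuming the grid arrives as a list of equal-length strings; flood fill stands in for whatever traversal the real code used — the key move is the same subtract-the-common-edge trick):

```python
from collections import deque

def fence_price(grid):
    """Sum of area * perimeter over all regions of same-letter squares.
    Each square contributes 4 to the perimeter, minus 1 per same-region
    neighbour (so each shared edge is subtracted twice, once per side)."""
    rows, cols = len(grid), len(grid[0])
    seen = [[False] * cols for _ in range(rows)]
    total = 0
    for r in range(rows):
        for c in range(cols):
            if seen[r][c]:
                continue
            area = perimeter = 0
            queue = deque([(r, c)])
            seen[r][c] = True
            while queue:  # flood-fill one region
                y, x = queue.popleft()
                area += 1
                perimeter += 4
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < rows and 0 <= nx < cols and grid[ny][nx] == grid[r][c]:
                        perimeter -= 1  # shared edge, seen from this side
                        if not seen[ny][nx]:
                            seen[ny][nx] = True
                            queue.append((ny, nx))
            total += area * perimeter
    return total
```

Every square and edge is visited a constant number of times, which is where the linear-in-area bound comes from.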
on the first day* of christmas, techbros gave to me: a product that would unlaunch promptly!
*just imagine we are in a world line where the days of Christmas start today, I guess