Day 9: Disk Fragmenter
Megathread guidelines
- Keep top-level comments as solutions only; if you want to say something other than a solution, put it in a new post (replies to comments can be whatever)
- You can send code in code blocks by using three backticks, the code, and then three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sending it through a URL
FAQ
- What is this?: Here is a post with a large amount of details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
So cool, I was very hyped when I managed to squeeze out the last bit of performance, hope you are too. Especially surprised you managed it with python, even without the simple tricks like trees ;)
I wanted to try it myself, and I can confirm it runs in under 0.1 s in performance mode on my laptop. I am amazed, though I must admit I don’t understand your newest revision. 🙈
Just to let you know, I posted the fastest Python version I could come up with, which took heavy inspiration from [ link to github ]. It is supposedly O(n) linear time, and it does seem to run really fast.
Thanks! Your Haskell solution is extremely fast, and I don’t understand your solution either. 🙈 lol
My latest revision just keeps a dict of lists of known empty slots, with the slot length as the dict key (including partially filled slots). I iteratively find the slot with the lowest index number and make sure the lists stay ordered from lowest to highest index number.
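A minimal Python sketch of that bookkeeping, with illustrative names (`empty_slots_by_length`, `best_slot`) that are my assumptions, not the actual variables from the solution:

```python
from collections import defaultdict

# Empty slots grouped by length; each list of start indices is kept sorted,
# so the leftmost slot of a given length is always at the front.
empty_slots_by_length: dict[int, list[int]] = defaultdict(list)

def best_slot(file_length: int) -> tuple[int, int] | None:
    """Leftmost (start_index, slot_length) able to hold `file_length` blocks."""
    candidates = [
        (slots[0], length)
        for length, slots in empty_slots_by_length.items()
        if length >= file_length and slots
    ]
    return min(candidates) if candidates else None
```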
looking at the challenge example/description, it shows a first-pass-only type of “fragmenting”. We can be confident that if something did not fit, it just stays in the same spot, even if another slot later frees up enough space for it to fit. So just checking whether the current index is lower than the lowest index number across all of the slot lengths is enough to stop early. That is why I got rid of `last_consecutive_full_partition`, because it was slowing it down by up to 2 seconds.

In the example, even if `5555`, `6666`, or `8888` could fit in the new spot created by moving `44`, they stay put. Thus a first-pass-only sort from back to front:

```
00...111...2...333.44.5555.6666.777.888899
0099.111...2...333.44.5555.6666.777.8888..
0099.1117772...333.44.5555.6666.....8888..
0099.111777244.333....5555.6666.....8888..
00992111777.44.333....5555.6666.....8888..
```
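Continuing the sketch above (so `empty_slots_by_length` and `best_slot` come from it), the one-pass loop with that early stop might look roughly like this; `files` is an assumed list of `(file_id, start_index, length)` tuples, not the original code:

```python
import bisect

# Visit each file exactly once, from the rightmost (highest id) to the
# leftmost, so nothing is reconsidered after space frees up behind it.
for file_id, file_index, file_length in reversed(files):
    lowest_slot = min(
        (slots[0] for slots in empty_slots_by_length.values() if slots),
        default=None,
    )
    # Early stop: every remaining empty slot starts to the right of this
    # file, and the files still to come start even further left.
    if lowest_slot is None or lowest_slot > file_index:
        break

    found = best_slot(file_length)
    if found is None or found[0] >= file_index:
        continue  # nothing big enough to the left, so the file stays put
    slot_index, slot_length = found
    # (recording the file's new start at slot_index for the checksum is omitted here)
    empty_slots_by_length[slot_length].pop(0)
    leftover = slot_length - file_length
    if leftover:
        # Re-file the partially used slot under its new, shorter length.
        bisect.insort(empty_slots_by_length[leftover], slot_index + file_length)
```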
I only found your edit now, after I had finished my previous comment. I think splitting into two lists may be good: one list of Files and one of Empty Blocks. Though this may not work with your checksumming, so maybe not.
Thank you for the detailed explanation! It made me realize that our solutions are very similar. Instead of keeping a `Dict[Int, List[Int]]` where the value list is ordered, I have a `Dict[Int, Tree[Int]]`, which allows for easy (and fast!) lookups due to the nature of trees. (Also, lists in Haskell are horrible to mutate.)

I also apply your technique of only processing each file once: instead of calculating the checksum afterwards on the entire list of file blocks, I calculate it as I go, whenever I process a file. Using some maths I managed to reduce the sum to a constant expression.
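That constant expression can be stated directly: a file with id f and length n starting at position p contributes f·(p + (p+1) + … + (p+n−1)) = f·(n·p + n·(n−1)/2) to the checksum. A tiny Python sketch of that arithmetic-series form (the function name is illustrative, not from either solution):

```python
def file_checksum(file_id: int, start: int, length: int) -> int:
    """Checksum contribution of one file occupying positions
    start .. start + length - 1, via the arithmetic-series sum."""
    return file_id * (length * start + length * (length - 1) // 2)

# Same result as summing position * file_id block by block:
assert file_checksum(9, 2, 4) == sum(pos * 9 for pos in range(2, 2 + 4))
```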
yeah, I was a bit exhausted, thinking in a high-level abstract way. I do think that if I did the checksum at the same time I could shave off a few more milliseconds, though it is at about the limits of speed, especially for Python with its limited data types (no trees lol). Decently fast enough for me :)

edit: I also just tested it, and splitting into two lists gave no real speed-up and actually made it slower. Iterating backwards really is fast with that list slice. I can’t think of another way to speed it up beyond what it can do right now.
Trees are a poor man's Sets and vice versa .-.
ah well, I tried switching to Python's `set()`, but it was slow because it is unordered. I would need to use `min()` to find the minimum index number, which was slow af. Indexing might be fast, but `pop(0)` on a list is just as fast. (Switching to `deque` gave no speed-up either.) The list operations I am using are mostly O(1) time.

If I comment out the part that does the adding, so that it isolates the checksum part, it is still only 80-100 ms. So the checksum part causes no noticeable slowdown; even if I did the checksum at the same time as the sorting, it would not lower the execution time.
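To illustrate the two lookup strategies compared above (an unordered `set()` plus `min()` vs. a sorted list), a toy sketch with made-up values, not the solution's variables:

```python
# Sorted list per slot length: the lowest index is always at the front.
slots_list = [3, 11, 40]
lowest = slots_list[0]     # O(1) peek at the leftmost slot
slots_list.pop(0)          # drop it once the slot is consumed

# Unordered set: every lookup has to scan the whole set for the minimum.
slots_set = {11, 40, 3}
lowest = min(slots_set)    # linear scan on each lookup
slots_set.discard(lowest)
```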
Thank you for trying, oh well. Maybe we are simply at the limits.
So if I look at each part of my code:

- the first 4 lines take ~20 ms
- the part 1 for loop takes ~10 ms
- the for loop that sets up `next_empty_slot_by_length` takes another ~10 ms
- the part 2 for loop takes ~10 ms, too
- adding up the part 2 checksums adds another ~10 ms

So, in total, it runs in ~60 ms, but Python startup overhead seems to add another 20-40 ms depending on whether you are on Linux (20 ms) or Windows (40 ms); both are on the host, not a VM. Linux usually has the faster startup time.

I am not sure where I would find a speed-up. It seems that the startup overhead makes this just slower than the other top-performing solutions, which are also hitting a limit of 40-60 ms.
no way, someone is able to do it in O(n) time with OCaml. absolutely nutty. lol
Thank you for the link, this is crazy!