Day 16: Reindeer Maze
Megathread guidelines
- Keep top-level comments to solutions only; if you want to say something else, put it in a new post. (Replies to comments can be whatever.)
- You can post code in a code block by wrapping it in three backticks, or use something such as https://topaz.github.io/paste/ if you prefer sharing it through a URL.
FAQ
- What is this?: Here is a post with a large amount of details: https://programming.dev/post/6637268
- Where do I participate?: https://adventofcode.com/
- Is there a leaderboard for the community?: We have a programming.dev leaderboard with the info on how to join in this post: https://programming.dev/post/6631465
C
Yay more grids! Seemed like prime Dijkstra or A* material but I went with an iterative approach instead!
I keep an array `cost[y][x][dir]`, which is seeded at 1 for the starting location and direction. Then I keep going over the array, seeing if any valid move (step or turn) would yield a lower best-known cost for that state. It ends when a pass no longer yields any changes.
This leaves us with the best-known costs for every reachable state in the array, including the end cell (but we have to take the `min()` of the four directions).
Part 2 was interesting: I just happened to have written a dead-end pruning function for part 1, and part 2 is, really, dead-end pruning on the cost map: remove any suboptimal step, keep doing so, and you end up with only the optimal steps. ‘Suboptimal’ here means a move that yields a higher total cost than the best-known cost for that state.
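For illustration, the same relaxation idea as a rough Python sketch (my paraphrase, not the author's C below; it assumes the maze has a solid wall border and the start faces east):

import math

# N, E, S, W as (dx, dy); best[y][x][d] = cheapest known score at (x, y) facing d
DIRS = [(0, -1), (1, 0), (0, 1), (-1, 0)]

def best_costs(grid, sx, sy):
    h, w = len(grid), len(grid[0])
    best = [[[math.inf] * 4 for _ in range(w)] for _ in range(h)]
    best[sy][sx][1] = 0                      # start facing east
    changed = True
    while changed:                           # keep sweeping until a pass changes nothing
        changed = False
        for y in range(h):
            for x in range(w):
                if grid[y][x] == '#':
                    continue
                for d, (dx, dy) in enumerate(DIRS):
                    c = best[y][x][d]
                    if c == math.inf:
                        continue
                    # step forward, or turn left/right in place
                    for nx, ny, nd, nc in ((x + dx, y + dy, d, c + 1),
                                           (x, y, (d + 1) % 4, c + 1000),
                                           (x, y, (d + 3) % 4, c + 1000)):
                        if grid[ny][nx] != '#' and nc < best[ny][nx][nd]:
                            best[ny][nx][nd] = nc
                            changed = True
    return best

The answer for part 1 is then the minimum over the four directions at the end cell, as described above.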
It’s fast enough too on my 2015 i5:
day16 0:00.05 1656 Kb 0+242 faults
Code
#include "common.h" #define GZ 145 enum {NN, EE, SS, WW}; static const int dx[]={0,1,0,-1}, dy[]={-1,0,1,0}; static char g[GZ][GZ]; /* with 1 tile border */ static int cost[GZ][GZ][4]; /* per direction, starts at 1, 0=no info */ static int traversible(char c) { return c=='.' || c=='S' || c=='E'; } static int minat(int x, int y) { int acc=0, d; for (d=0; d<4; d++) if (cost[y][x][d] && (!acc || cost[y][x][d] < acc)) acc = cost[y][x][d]; return acc; } static int count_exits(int x, int y) { int acc=0, i; assert(x>0); assert(x<GZ-1); assert(y>0); assert(y<GZ-1); for (i=0; i<4; i++) acc += traversible(g[y+dy[i]][x+dx[i]]); return acc; } /* remove all dead ends */ static void prune_dead(void) { int dirty=1, x,y; while (dirty) { dirty = 0; for (y=1; y<GZ-1; y++) for (x=1; x<GZ-1; x++) if (g[y][x]=='.' && count_exits(x,y) < 2) { dirty = 1; g[y][x] = '#'; } } } /* remove all dead ends from cost[], leaves only optimal paths */ static void prune_subopt(void) { int dirty=1, x,y,d; while (dirty) { dirty = 0; for (y=1; y<GZ-1; y++) for (x=1; x<GZ-1; x++) for (d=0; d<4; d++) { if (!cost[y][x][d]) continue; if (g[y][x]=='E') { if (cost[y][x][d] != minat(x,y)) { dirty = 1; cost[y][x][d] = 0; } continue; } if (cost[y][x][d]+1 > cost[y+dy[d]][x+dx[d]][d] && cost[y][x][d]+1000 > cost[y][x][(d+1)%4] && cost[y][x][d]+1000 > cost[y][x][(d+3)%4]) { dirty = 1; cost[y][x][d] = 0; } } } } static void propagate_costs(void) { int dirty=1, cost1, x,y,d; while (dirty) { dirty = 0; for (y=1; y<GZ-1; y++) for (x=1; x<GZ-1; x++) for (d=0; d<4; d++) { if (!traversible(g[y][x])) continue; /* from back */ if ((cost1 = cost[y-dy[d]][x-dx[d]][d]) && (cost1+1 < cost[y][x][d] || !cost[y][x][d])) { dirty = 1; cost[y][x][d] = cost1+1; } /* from right */ if ((cost1 = cost[y][x][(d+1)%4]) && (cost1+1000 < cost[y][x][d] || !cost[y][x][d])) { dirty = 1; cost[y][x][d] = cost1+1000; } /* from left */ if ((cost1 = cost[y][x][(d+3)%4]) && (cost1+1000 < cost[y][x][d] || !cost[y][x][d])) { dirty = 1; cost[y][x][d] = cost1+1000; } } } } int main(int argc, char **argv) { int p1=0,p2=0, sx=0,sy=0, ex=0,ey=0, x,y; char *p; if (argc > 1) DISCARD(freopen(argv[1], "r", stdin)); for (y=1; fgets(g[y]+1, GZ-1, stdin); y++) { if ((p = strchr(g[y]+1, 'S'))) { sy=y; sx=p-g[y]; } if ((p = strchr(g[y]+1, 'E'))) { ey=y; ex=p-g[y]; } assert(y+1 < GZ-1); } cost[sy][sx][EE] = 1; prune_dead(); propagate_costs(); prune_subopt(); p1 = minat(ex, ey) -1; /* costs[] values start at 1! */ for (y=1; y<GZ-1; y++) for (x=1; x<GZ-1; x++) p2 += minat(x,y) > 0; printf("16: %d %d\n", p1, p2); return 0; }
Very interesting approach. Pruning dead ends by spawning additional walls is a very clever idea.
C#
Ended up modifying part 1 to do part 2 and return both answers at once.
using System.Collections.Immutable; using System.Diagnostics; using Common; namespace Day16; static class Program { static void Main() { var start = Stopwatch.GetTimestamp(); var smallInput = Input.Parse("smallsample.txt"); var sampleInput = Input.Parse("sample.txt"); var programInput = Input.Parse("input.txt"); Console.WriteLine($"Part 1 small: {Solve(smallInput)}"); Console.WriteLine($"Part 1 sample: {Solve(sampleInput)}"); Console.WriteLine($"Part 1 input: {Solve(programInput)}"); Console.WriteLine($"That took about {Stopwatch.GetElapsedTime(start)}"); } static (int part1, int part2) Solve(Input i) { State? endState = null; Dictionary<(Point, int), int> lowestScores = new(); var queue = new Queue<State>(); queue.Enqueue(new State(i.Start, 1, 0, ImmutableHashSet<Point>.Empty)); while (queue.TryDequeue(out var state)) { if (ElementAt(i.Map, state.Location) is '#') { continue; } if (lowestScores.TryGetValue((state.Location, state.DirectionIndex), out var lowestScoreSoFar)) { if (state.Score > lowestScoreSoFar) continue; } lowestScores[(state.Location, state.DirectionIndex)] = state.Score; var nextStatePoints = state.Points.Add(state.Location); if (state.Location == i.End) { if ((endState is null) || (state.Score < endState.Score)) endState = state with { Points = nextStatePoints }; else if (state.Score == endState.Score) endState = state with { Points = nextStatePoints.Union(endState.Points) }; continue; } // Walk forward queue.Enqueue(state with { Location = state.Location.Move(CardinalDirections[state.DirectionIndex]), Score = state.Score + 1, Points = nextStatePoints, }); // Turn clockwise queue.Enqueue(state with { DirectionIndex = (state.DirectionIndex + 1) % CardinalDirections.Length, Score = state.Score + 1000, Points = nextStatePoints, }); // Turn counter clockwise queue.Enqueue(state with { DirectionIndex = (state.DirectionIndex + CardinalDirections.Length - 1) % CardinalDirections.Length, Score = state.Score + 1000, Points = nextStatePoints, }); } if (endState is null) throw new Exception("No end state found!"); return (endState.Score, endState.Points.Count); } public static void DumpMap(Input i, ISet<Point>? points, Point current) { for (int row = 0; row < i.Bounds.Row; row++) { for (int col = 0; col < i.Bounds.Col; col++) { var p = new Point(row, col); Console.Write( (p == current) ? 'X' : (points?.Contains(p) ?? false) ? 'O' : ElementAt(i.Map, p)); } Console.WriteLine(); } Console.WriteLine(); } public static char ElementAt(string[] map, Point location) => map[location.Row][location.Col]; public record State(Point Location, int DirectionIndex, int Score, ImmutableHashSet<Point> Points); public static readonly Direction[] CardinalDirections = [Direction.Up, Direction.Right, Direction.Down, Direction.Left]; } public class Input { public string[] Map { get; init; } = []; public Point Start { get; init; } = new(-1, -1); public Point End { get; init; } = new(-1, -1); public Point Bounds => new(this.Map.Length, this.Map[0].Length); public static Input Parse(string file) { var map = File.ReadAllLines(file); Point start = new(-1, -1), end = new(-1, -1); foreach (var p in map .SelectMany((line, i) => new [] { new Point(i, line.IndexOf('S')), new Point(i, line.IndexOf('E')), }) .Where(p => p.Col >= 0) .Take(2)) { if (map[p.Row][p.Col] is 'S') start = p; else end = p; } return new Input() { Map = map, Start = start, End = end, }; } }
Haskell
code
import Control.Arrow import Control.Monad import Control.Monad.RWS import Control.Monad.Trans.Maybe import Data.Array.Unboxed import Data.List import Data.Map qualified as M import Data.Maybe import Data.Set qualified as S data Dir = N | S | W | E deriving (Show, Eq, Ord) type Maze = UArray Pos Char type Pos = (Int, Int) type Node = (Pos, Dir) type CostNode = (Int, Node) type Problem = RWS Maze [(Node, [Node])] (M.Map Node Int, S.Set (CostNode, Maybe Node)) parse = toMaze . lines toMaze :: [String] -> Maze toMaze b = listArray ((0, 0), (n - 1, m - 1)) $ concat b where n = length b m = length $ head b next :: Int -> (Pos, Dir) -> Problem [CostNode] next c (p, d) = do m <- ask let straigth = fmap ((1,) . (,d)) . filter ((/= '#') . (m !)) . return $ move d p turn = (1000,) . (p,) <$> rot d return $ first (+ c) <$> straigth ++ turn move N = first (subtract 1) move S = first (+ 1) move W = second (subtract 1) move E = second (+ 1) rot d | d `elem` [N, S] = [E, W] | otherwise = [N, S] dijkstra :: MaybeT Problem () dijkstra = do m <- ask visited <- gets fst Just (((cost, vertex@(p, _)), father), queue) <- gets (S.minView . snd) let (prevCost, visited') = M.insertLookupWithKey (\_ a _ -> a) vertex cost visited case prevCost of Nothing -> do queue' <- lift $ foldr S.insert queue <$> (fmap (,Just vertex) <$> next cost vertex) put (visited', queue') tell [(vertex, maybeToList father)] Just c -> do if c == cost then tell [(vertex, maybeToList father)] else guard $ m ! p /= 'E' put (visited, queue) dijkstra solve b = do start <- getStart b end <- getEnd b let ((m, _), w) = execRWS (runMaybeT dijkstra) b (M.empty, S.singleton (start, Nothing)) parents = M.fromListWith (++) w endDirs = (end,) <$> [N, S, E, W] min = minimum $ mapMaybe (`M.lookup` m) endDirs ends = filter ((== Just min) . (`M.lookup` m)) endDirs part2 = S.size . S.fromList . fmap fst . concat . takeWhile (not . null) $ iterate (>>= flip (M.findWithDefault []) parents) ends return (min, part2) getStart :: Maze -> Maybe CostNode getStart = fmap ((0,) . (,E) . fst) . find ((== 'S') . snd) . assocs getEnd :: Maze -> Maybe Pos getEnd = fmap fst . find ((== 'E') . snd) . assocs main = getContents >>= print . solve . parse
Haskell
This one was surprisingly slow to run
Big codeblock
import Control.Arrow import Data.Map (Map) import Data.Set (Set) import Data.Array.ST (STArray) import Data.Array (Array) import Control.Monad.ST (ST, runST) import qualified Data.Char as Char import qualified Data.List as List import qualified Data.Map as Map import qualified Data.Set as Set import qualified Data.Array.ST as MutableArray import qualified Data.Array as Array import qualified Data.Maybe as Maybe data Direction = East | West | South | North deriving (Show, Eq, Ord) data MazeTile = Start | End | Wall | Unknown | Explored (Map Direction ExplorationScore) deriving Eq -- instance Show MazeTile where -- show Wall = "#" -- show Start = "S" -- show End = "E" -- show Unknown = "." -- show (Explored (East, _)) = ">" -- show (Explored (South, _)) = "v" -- show (Explored (West, _)) = "<" -- show (Explored (North, _)) = "^" type Position = (Int, Int) type ExplorationScore = Int translate '#' = Wall translate '.' = Unknown translate 'S' = Start translate 'E' = End parse :: String -> Array (Int, Int) MazeTile parse s = Array.listArray ((1, 1), (height - 1, width)) . map translate . filter (/= '\n') $ s where width = length . takeWhile (/= '\n') $ s height = length . filter (== '\n') $ s (a1, b1) .+. (a2, b2) = (a1+a2, b1+b2) (a1, b1) .-. (a2, b2) = (a1-a2, b1-b2) directions = [East, West, South, North] directionVector East = (0, 1) directionVector West = (0, -1) directionVector North = (-1, 0) directionVector South = ( 1, 0) turnRight East = South turnRight South = West turnRight West = North turnRight North = East walkableNeighbors a p = do let neighbors = List.map ((.+. p) . directionVector) directions tiles <- mapM (MutableArray.readArray a) neighbors let neighborPosition = List.map fst . List.filter ((/= Wall). snd) . zip neighbors $ tiles return $ neighborPosition findDeadEnds a = Array.assocs >>> List.filter (snd >>> (== Unknown)) >>> List.map (fst) >>> List.filter (isDeadEnd a) $ a isDeadEnd a p = List.map directionVector >>> List.map (.+. p) >>> List.map (a Array.!) >>> List.filter (/= Wall) >>> List.length >>> (== 1) $ directions fillDeadEnds :: Array (Int, Int) MazeTile -> ST s (Array (Int, Int) MazeTile) fillDeadEnds a = do ma <- MutableArray.thaw a let deadEnds = findDeadEnds a mapM_ (fillDeadEnd ma) deadEnds MutableArray.freeze ma fillDeadEnd :: STArray s (Int, Int) MazeTile -> Position -> ST s () fillDeadEnd a p = do MutableArray.writeArray a p Wall p' <- walkableNeighbors a p >>= return . head t <- MutableArray.readArray a p' n <- walkableNeighbors a p' >>= return . List.length if n == 1 && t == Unknown then fillDeadEnd a p' else return () thawArray :: Array (Int, Int) MazeTile -> ST s (STArray s (Int, Int) MazeTile) thawArray a = do a' <- MutableArray.thaw a return a' solveMaze a = do a' <- fillDeadEnds a a'' <- thawArray a' let s = Array.assocs >>> List.filter ((== Start) . snd) >>> Maybe.listToMaybe >>> Maybe.maybe (error "Start not in map") fst $ a let e = Array.assocs >>> List.filter ((== End) . snd) >>> Maybe.listToMaybe >>> Maybe.maybe (error "End not in map") fst $ a MutableArray.writeArray a'' s $ Explored (Map.singleton East 0) MutableArray.writeArray a'' e $ Unknown solveMaze' (s, East) a'' fa <- MutableArray.freeze a'' t <- MutableArray.readArray a'' e case t of Wall -> error "Unreachable code" Start -> error "Unreachable code" End -> error "Unreachable code" Unknown -> error "End was not explored yet" Explored m -> return (List.minimum . List.map snd . Map.toList $ m, countTiles fa s e) countTiles a s p = Set.size . 
countTiles' a s p $ South countTiles' :: Array (Int, Int) MazeTile -> Position -> Position -> Direction -> Set Position countTiles' a s p d | p == s = Set.singleton p | otherwise = Set.unions . List.map (Set.insert p) . List.map (uncurry (countTiles' a s)) $ (zip minCostNeighbors minCostDirections) where minCostNeighbors = List.map ((p .-.) . directionVector) minCostDirections minCostDirections = List.map fst . List.filter ((== minCost) . snd) . Map.toList $ visits visits = case a Array.! p of Explored m -> Map.adjust (+ (-1000)) d m minCost = List.minimum . List.map snd . Map.toList $ visits maybeExplore c p d a = do t <- MutableArray.readArray a p case t of Wall -> return () Start -> error "Unreachable code" End -> error "Unreachable code" Unknown -> do MutableArray.writeArray a p $ Explored (Map.singleton d c) solveMaze' (p, d) a Explored m -> do let c' = Maybe.maybe c id (m Map.!? d) if c <= c' then do let m' = Map.insert d c m MutableArray.writeArray a p (Explored m') solveMaze' (p, d) a else return () solveMaze' :: (Position, Direction) -> STArray s (Int, Int) MazeTile -> ST s () solveMaze' s@(p, d) a = do t <- MutableArray.readArray a p case t of Wall -> return () Start -> error "Unreachable code" End -> error "Unreachable code" Unknown -> error "Starting on unexplored field" Explored m -> do let c = m Map.! d maybeExplore (c+1) (p .+. directionVector d) d a let d' = turnRight d maybeExplore (c+1001) (p .+. directionVector d') d' a let d'' = turnRight d' maybeExplore (c+1001) (p .+. directionVector d'') d'' a let d''' = turnRight d'' maybeExplore (c+1001) (p .+. directionVector d''') d''' a part1 a = runST (solveMaze a) main = getContents >>= print . part1 . parse
Uiua
Uiua’s new builtin `path` operator makes this a breeze. Given a function that returns valid neighbours for a point and their relative costs, and another function to test whether you have reached a valid goal, it gives the minimal cost and all relevant paths. We just need to keep track of the current direction as we work through the maze. (edit: forgot the Try It Live! link)
Data ← ≡°□°/$"_\n_" "#################\n#...#...#...#..E#\n#.#.#.#.#.#.#.#^#\n#.#.#.#...#...#^#\n#.#.#.#.###.#.#^#\n#>>v#.#.#.....#^#\n#^#v#.#.#.#####^#\n#^#v..#.#.#>>>>^#\n#^#v#####.#^###.#\n#^#v#..>>>>^#...#\n#^#v###^#####.###\n#^#v#>>^#.....#.#\n#^#v#^#####.###.#\n#^#v#^........#.#\n#^#v#^#########.#\n#S#>>^..........#\n#################" D₄ ← [1_0 ¯1_0 0_1 0_¯1] End ← ⊢⊚=@EData Costs ← :∩▽⟜:≡(≠@#⊡:Data⊢).≡⊟⊙⟜(+1×1000¬≡/×=)+⟜:D₄∩¤°⊟ path(Costs|≍End⊙◌°⊟)⊟:1_0⊢⊚=@SData &p ⧻◴≡⊢/◇⊂ &p :
Uiua is insane, such a small code footprint to fit something like a maze-solving algorithm.
Haskell
Rather busy today so late and somewhat messy! (Probably the same tomorrow…)
import Data.List import Data.Map (Map) import Data.Map qualified as Map import Data.Maybe import Data.Set (Set) import Data.Set qualified as Set readInput :: String -> Map (Int, Int) Char readInput s = Map.fromList [((i, j), c) | (i, l) <- zip [0 ..] (lines s), (j, c) <- zip [0 ..] l] bestPath :: Map (Int, Int) Char -> (Int, Set (Int, Int)) bestPath maze = go (Map.singleton start (0, Set.singleton startPos)) (Set.singleton start) where start = (startPos, (0, 1)) walls = Map.keysSet $ Map.filter (== '#') maze [Just startPos, Just endPos] = map (\c -> fst <$> find ((== c) . snd) (Map.assocs maze)) ['S', 'E'] go best edge | Set.null edge = Map.mapKeysWith mergePaths fst best Map.! endPos | otherwise = let nodes' = filter (\(x, (c, _)) -> maybe True ((c <=) . fst) $ best Map.!? x) $ concatMap (step . (\x -> (x, best Map.! x))) (Set.elems edge) best' = foldl' (flip $ uncurry $ Map.insertWith mergePaths) best nodes' in go best' $ Set.fromList (map fst nodes') step ((p@(i, j), d@(di, dj)), (cost, path)) = let rots = [((p, d'), (cost + 1000, path)) | d' <- [(-dj, di), (dj, -di)]] moves = [ ((p', d), (cost + 1, Set.insert p' path)) | let p' = (i + di, j + dj), p `Set.notMember` walls ] in moves ++ rots mergePaths a@(c1, p1) b@(c2, p2) = case compare c1 c2 of LT -> a GT -> b EQ -> (c1, Set.union p1 p2) main = do (score, visited) <- bestPath . readInput <$> readFile "input16" print score print (Set.size visited)
Dart
I liked the flexibility of the `path` operator in the Uiua solution so much that I built a similar search function in Dart. Not quite as compact, but still an interesting piece of code that I will keep on hand for other path-finding puzzles. About 80 lines of code, about half of which is the super-flexible search function.
import 'dart:math'; import 'package:collection/collection.dart'; import 'package:more/more.dart'; List<Point<num>> d4 = [Point(1, 0), Point(-1, 0), Point(0, 1), Point(0, -1)]; /// Returns cost to destination, plus list of routes to destination. /// Does Dijkstra/A* search depending on whether heuristic returns 1 or /// something better. (num, List<List<T>>) aStarSearch<T>(T start, Map<T, num> Function(T) fNext, int Function(T) fHeur, bool Function(T) fAtEnd) { var cameFrom = SetMultimap<T, T>.fromEntries([MapEntry(start, start)]); var ends = <T>{}; var front = PriorityQueue<T>((a, b) => fHeur(a).compareTo(fHeur(b))) ..add(start); var cost = <T, num>{start: 0}; while (front.isNotEmpty) { var here = front.removeFirst(); if (fAtEnd(here)) { ends.add(here); continue; } var ns = fNext(here); for (var n in ns.keys) { var nCost = cost[here]! + ns[n]!; if (!cost.containsKey(n) || nCost < cost[n]!) { cost[n] = nCost; front.add(n); cameFrom.removeAll(n); } if (cost[n] == nCost) cameFrom[n].add(here); } } Iterable<List<T>> routes(T h) sync* { if (h == start) { yield [h]; return; } for (var p in cameFrom[h]) { yield* routes(p).map((e) => e + [h]); } } var minCost = ends.map((e) => cost[e]!).min; ends = ends.where((e) => cost[e]! == minCost).toSet(); return (minCost, ends.fold([], (s, t) => s..addAll(routes(t).toList()))); } typedef PP = (Point, Point); (num, List<List<PP>>) solve(List<String> lines) { var grid = { for (var r in lines.indexed()) for (var c in r.value.split('').indexed().where((e) => e.value != '#')) Point<num>(c.index, r.index): c.value }; var start = grid.entries.firstWhere((e) => e.value == 'S').key; var end = grid.entries.firstWhere((e) => e.value == 'E').key; var dir = Point<num>(1, 0); fHeur(PP pd) => 1; // faster than euclidean distance. fNextAndCost(PP pd) => <PP, int>{ for (var n in d4 .where((n) => n != pd.last * -1 && grid.containsKey(pd.first + n))) (pd.first + n, n): ((n == pd.last) ? 1 : 1001) // (Point, Dir) : Cost }; fAtEnd(PP pd) => pd.first == end; return aStarSearch<PP>((start, dir), fNextAndCost, fHeur, fAtEnd); } part1(List<String> lines) => solve(lines).first; part2(List<String> lines) => solve(lines) .last .map((l) => l.map((e) => e.first).toSet()) .flattenedToSet .length;
Rust
Not sure if I should dump my full solution, it's quite long. If it's too long I'll delete it. Way over-engineered, and it performs like it as well: quite slow.
Quite proud of my hack for pt2. I walk back along the path, which is nothing special. But because of the turn costs, whenever a turn joins a straight, it makes the straight discontinuous:
###### 11043 ######
 10041 10042 ######
###### 11041 ######
So I check the cells before and after: I make sure the previous one is already marked as part of a shortest path, check the cell after to make sure it's 2 steps apart, and ignore the middle. Dunno if anyone else has done the same thing, I've mostly managed to avoid spoilers today.
code
#[cfg(test)] mod tests { use crate::day_16::tests::State::{CELL, END, SHORTPATH, START, WALL}; use std::cmp::PartialEq; fn get_cell(board: &[Vec<MazeCell>], row: isize, col: isize) -> &MazeCell { &board[row as usize][col as usize] } fn set_cell(board: &mut [Vec<MazeCell>], value: &MazeStep) { let cell = &mut board[value.i as usize][value.j as usize]; cell.dir = value.dir; cell.cost = value.cost; cell.state = value.state.clone(); } fn find_cell(board: &mut [Vec<MazeCell>], state: State) -> (isize, isize) { for i in 0..board.len() { for j in 0..board[i].len() { if get_cell(board, i as isize, j as isize).state == state { return (i as isize, j as isize); } } } unreachable!(); } static DIRECTIONS: [(isize, isize); 4] = [(0, 1), (1, 0), (0, -1), (-1, 0)]; #[derive(PartialEq, Debug, Clone)] enum State { CELL, WALL, START, END, SHORTPATH, } struct MazeCell { dir: i8, cost: isize, state: State, } struct MazeStep { i: isize, j: isize, dir: i8, cost: isize, state: State, } fn walk_maze(board: &mut [Vec<MazeCell>]) -> isize { let start = find_cell(board, START); let mut moves = vec![MazeStep { i: start.0, j: start.1, cost: 0, dir: 0, state: START, }]; let mut best = isize::MAX; loop { if moves.is_empty() { break; } let cell = moves.pop().unwrap(); let current_cost = get_cell(board, cell.i, cell.j); if current_cost.state == END { if cell.cost < best { best = cell.cost; } continue; } if current_cost.state == WALL { continue; } if current_cost.cost < cell.cost { continue; } set_cell(board, &cell); for (i, dir) in DIRECTIONS.iter().enumerate() { let cost = match (i as i8) - cell.dir { 0 => cell.cost + 1, -2 | 2 => continue, _ => cell.cost + 1001, }; moves.push(MazeStep { i: cell.i + dir.0, j: cell.j + dir.1, dir: i as i8, cost, state: State::CELL, }); } } best } fn unwalk_path(board: &mut [Vec<MazeCell>], total_cost: isize) -> usize { let end = find_cell(board, END); let mut cells = vec![MazeStep { i: end.0, j: end.1, dir: 0, cost: total_cost, state: State::END, }]; set_cell(board, &cells[0]); while let Some(mut cell) = cells.pop() { for dir in DIRECTIONS { let next_cell = get_cell(board, cell.i + dir.0, cell.j + dir.1); if next_cell.cost == 0 { continue; } if next_cell.state == WALL { continue; } if next_cell.state == CELL && (next_cell.cost == &cell.cost - 1001 || next_cell.cost == &cell.cost - 1) { cells.push(MazeStep { i: cell.i + dir.0, j: cell.j + dir.1, dir: 0, cost: next_cell.cost, state: CELL, }); } else { let prev_cell = get_cell(board, cell.i - dir.0, cell.j - dir.1); if prev_cell.state == SHORTPATH && prev_cell.cost - 2 == next_cell.cost { cells.push(MazeStep { i: cell.i + dir.0, j: cell.j + dir.1, dir: 0, cost: next_cell.cost, state: CELL, }); } } } cell.state = SHORTPATH; set_cell(board, &cell); } let mut count = 0; for row in board { for cell in row { if cell.state == SHORTPATH { count += 1; } if cell.state == END { count += 1; } if cell.state == START { count += 1; } } } count } #[test] fn day15_part2_test() { let input = std::fs::read_to_string("src/input/day_16.txt").unwrap(); let mut board = input .split('\n') .map(|line| { line.chars() .map(|c| match c { '#' => MazeCell { dir: 0, cost: isize::MAX, state: WALL, }, 'S' => MazeCell { dir: 0, cost: isize::MAX, state: START, }, 'E' => MazeCell { dir: 0, cost: isize::MAX, state: END, }, '.' => MazeCell { dir: 0, cost: isize::MAX, state: CELL, }, _ => unreachable!(), }) .collect::<Vec<MazeCell>>() }) .collect::<Vec<Vec<MazeCell>>>(); let cost = walk_maze(&mut board); let count = unwalk_path(&mut board, cost); println!("{count}"); } }
Rust
Dijkstra’s algorithm. While the actual shortest path was not needed in part 1, only the distance, in part 2 the path is saved in the parent hashmap, and crucially, if we encounter two paths with the same distance, both parent nodes are saved. This ensures we end up with all shortest paths in the end.
Solution
use std::cmp::{Ordering, Reverse}; use euclid::{default::*, vec2}; use priority_queue::PriorityQueue; use rustc_hash::{FxHashMap, FxHashSet}; const DIRS: [Vector2D<i32>; 4] = [vec2(1, 0), vec2(0, 1), vec2(-1, 0), vec2(0, -1)]; type Node = (Point2D<i32>, u8); fn parse(input: &str) -> (Vec<Vec<bool>>, Point2D<i32>, Point2D<i32>) { let mut start = None; let mut end = None; let mut field = Vec::new(); for (y, l) in input.lines().enumerate() { let mut row = Vec::new(); for (x, b) in l.bytes().enumerate() { if b == b'S' { start = Some(Point2D::new(x, y).to_i32()); } else if b == b'E' { end = Some(Point2D::new(x, y).to_i32()); } row.push(b == b'#'); } field.push(row); } (field, start.unwrap(), end.unwrap()) } fn adj(field: &[Vec<bool>], (v, dir): Node) -> Vec<(Node, u32)> { let mut adj = Vec::with_capacity(3); let next = v + DIRS[dir as usize]; if !field[next.y as usize][next.x as usize] { adj.push(((next, dir), 1)); } adj.push(((v, (dir + 1) % 4), 1000)); adj.push(((v, (dir + 3) % 4), 1000)); adj } fn shortest_path_length(field: &[Vec<bool>], start: Node, end: Point2D<i32>) -> u32 { let mut dist: FxHashMap<Node, u32> = FxHashMap::default(); dist.insert(start, 0); let mut pq: PriorityQueue<Node, Reverse<u32>> = PriorityQueue::new(); pq.push(start, Reverse(0)); while let Some((v, _)) = pq.pop() { for (w, weight) in adj(field, v) { let dist_w = dist.get(&w).copied().unwrap_or(u32::MAX); let new_dist = dist[&v] + weight; if dist_w > new_dist { dist.insert(w, new_dist); pq.push_increase(w, Reverse(new_dist)); } } } // Shortest distance to end, regardless of final direction (0..4).map(|dir| dist[&(end, dir)]).min().unwrap() } fn part1(input: String) { let (field, start, end) = parse(&input); let distance = shortest_path_length(&field, (start, 0), end); println!("{distance}"); } fn shortest_path_tiles(field: &[Vec<bool>], start: Node, end: Point2D<i32>) -> u32 { let mut parents: FxHashMap<Node, Vec<Node>> = FxHashMap::default(); let mut dist: FxHashMap<Node, u32> = FxHashMap::default(); dist.insert(start, 0); let mut pq: PriorityQueue<Node, Reverse<u32>> = PriorityQueue::new(); pq.push(start, Reverse(0)); while let Some((v, _)) = pq.pop() { for (w, weight) in adj(field, v) { let dist_w = dist.get(&w).copied().unwrap_or(u32::MAX); let new_dist = dist[&v] + weight; match dist_w.cmp(&new_dist) { Ordering::Greater => { parents.insert(w, vec![v]); dist.insert(w, new_dist); pq.push_increase(w, Reverse(new_dist)); } // Remember both parents if distance is equal Ordering::Equal => parents.get_mut(&w).unwrap().push(v), Ordering::Less => {} } } } let mut path_tiles: FxHashSet<Point2D<i32>> = FxHashSet::default(); path_tiles.insert(end); // Shortest distance to end, regardless of final direction let shortest_dist = (0..4).map(|dir| dist[&(end, dir)]).min().unwrap(); for dir in 0..4 { if dist[&(end, dir)] == shortest_dist { collect_tiles(&parents, &mut path_tiles, (end, dir)); } } path_tiles.len() as u32 } fn collect_tiles( parents: &FxHashMap<Node, Vec<Node>>, tiles: &mut FxHashSet<Point2D<i32>>, cur: Node, ) { if let Some(pars) = parents.get(&cur) { for p in pars { tiles.insert(p.0); collect_tiles(parents, tiles, *p); } } } fn part2(input: String) { let (field, start, end) = parse(&input); let tiles = shortest_path_tiles(&field, (start, 0), end); println!("{tiles}"); } util::aoc_main!();
Also on github
C#
using QuickGraph; using QuickGraph.Algorithms.ShortestPath; namespace aoc24; [ForDay(16)] public class Day16 : Solver { private string[] data; private int width, height; private int start_x, start_y; private int end_x, end_y; private readonly (int, int)[] directions = [(1, 0), (0, 1), (-1, 0), (0, -1)]; private record class Edge((int, int, int) Source, (int, int, int) Target) : IEdge<(int, int, int)>; private DelegateVertexAndEdgeListGraph<(int, int, int), Edge> graph; private AStarShortestPathAlgorithm<(int, int, int), Edge> search; private long min_distance; private List<(int, int, int)> min_distance_targets; public void Presolve(string input) { data = input.Trim().Split("\n"); width = data[0].Length; height = data.Length; for (int i = 0; i < width; i++) { for (int j = 0; j < height; j++) { if (data[j][i] == 'S') { start_x = i; start_y = j; } else if (data[j][i] == 'E') { end_x = i; end_y = j; } } } graph = MakeGraph(); var start = (start_x, start_y, 0); search = new AStarShortestPathAlgorithm<(int, int, int), Edge>( graph, edge => edge.Source.Item3 == edge.Target.Item3 ? 1 : 1000, vertex => Math.Abs(vertex.Item1 - start_x) + Math.Abs(vertex.Item2 - start_y) + 1000 * Math.Min(vertex.Item3, 4 - vertex.Item3) ); Dictionary<(int, int, int), long> distances = []; search.SetRootVertex(start); search.ExamineVertex += vertex => { if (vertex.Item1 == end_x && vertex.Item2 == end_y) { distances[vertex] = (long)search.Distances[vertex]; } }; search.Compute(); min_distance = distances.Values.Min(); min_distance_targets = distances.Keys.Where(v => distances[v] == min_distance).ToList(); } private DelegateVertexAndEdgeListGraph<(int, int, int), Edge> MakeGraph() => new(GetAllVertices(), GetOutEdges); private bool GetOutEdges((int, int, int) arg, out IEnumerable<Edge> result_enumerable) { List<Edge> result = []; var (x, y, dir) = arg; result.Add(new Edge(arg, (x, y, (dir + 1) % 4))); result.Add(new Edge(arg, (x, y, (dir + 3) % 4))); var (tx, ty) = (x + directions[dir].Item1, y + directions[dir].Item2); if (data[ty][tx] != '#') result.Add(new Edge(arg, (tx, ty, dir))); result_enumerable = result; return true; } private IEnumerable<(int, int, int)> GetAllVertices() { for (int i = 0; i < width; i++) { for (int j = 0; j < height; j++) { if (data[j][i] == '#') continue; yield return (i, j, 0); yield return (i, j, 1); yield return (i, j, 2); yield return (i, j, 3); } } } private HashSet<(int, int, int)> GetMinimumPathNodesTo((int, int, int) vertex) { var (x, y, dir) = vertex; if (x == start_x && y == start_y && dir == 0) return [vertex]; if (!search.Distances.TryGetValue(vertex, out var distance_to_me)) return []; List<(int, int, int)> candidates = [ (x, y, (dir + 1) % 4), (x, y, (dir + 3) % 4), (x - directions[dir].Item1, y - directions[dir].Item2, dir), ]; HashSet<(int, int, int)> result = [vertex]; foreach (var (cx, cy, cdir) in candidates) { if (!search.Distances.TryGetValue((cx, cy, cdir), out var distance_to_candidate)) continue; if (distance_to_candidate > distance_to_me - (dir == cdir ? 1 : 1000)) continue; result = result.Union(GetMinimumPathNodesTo((cx, cy, cdir))).ToHashSet(); } return result; } public string SolveFirst() => min_distance.ToString(); public string SolveSecond() => min_distance_targets .SelectMany(v => GetMinimumPathNodesTo(v)) .Select(vertex => (vertex.Item1, vertex.Item2)) .ToHashSet() .Count .ToString(); }
Javascript
So my friend tells me my solution is close to Dijkstra but honestly I just tried what made sense until it worked. I originally wanted to just bruteforce it and get every single possible path explored but uh… Yeah that wasn’t gonna work, I terminated that one after 1B operations.
I created a class to store the state of the current path being explored, and basically just clone it, sending it in each direction (forward, 90 degrees, -90 degrees), then queue it up if it didn't fail. I used a priority queue (array-based) to store them, and inverted it for the second answer to reduce the memory footprint (though ultimately, once I fixed the issue with the algorithm, which turned out to be a less-than-or-equal that should have been a less-than, I didn't really need this).
Part two “only” took 45 seconds to run on my Thinkpad P14 Gen1.
My code was too powerful for Lemmy (or verbose): https://blocks.programming.dev/Zikeji/ae06ca1ca88649c99581eefce97a708e
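The general shape of that approach, sketched in Python rather than the linked JavaScript (names and details here are illustrative, not taken from the linked code):

import heapq

# Each queue entry is a "clone" of the current path state: (score, position, direction).
# A dict keeps the best score seen per (position, direction) so worse clones are dropped.
DIRS = [(-1, 0), (0, 1), (1, 0), (0, -1)]  # up, right, down, left as (row, col) deltas

def cheapest_path(grid, start, end):
    best = {}                      # (pos, dir) -> lowest score seen so far
    heap = [(0, start, 1)]         # start facing east
    while heap:
        score, pos, d = heapq.heappop(heap)
        if pos == end:
            return score
        if best.get((pos, d), float("inf")) <= score:
            continue               # a cheaper (or equal) clone already got here
        best[(pos, d)] = score
        # clone the state forward and into both 90-degree turns
        fwd = (pos[0] + DIRS[d][0], pos[1] + DIRS[d][1])
        if grid[fwd[0]][fwd[1]] != "#":
            heapq.heappush(heap, (score + 1, fwd, d))
        heapq.heappush(heap, (score + 1000, pos, (d + 1) % 4))
        heapq.heappush(heap, (score + 1000, pos, (d - 1) % 4))
    return None

For part 2 each clone would also need to carry the set of tiles it has visited, which is where the memory-footprint concern mentioned above comes in.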
Python
Part 1: Run Dijkstra’s algorithm to find shortest path.
I chose to represent nodes using the location `(i, j)` as well as the direction `dir` faced by the reindeer.
Initially I tried creating the complete adjacency graph, but that led to max recursion, so I ended up populating the graph only for the nodes I was currently exploring.
Part 2: Track paths while performing Dijkstra’s algorithm.
First, I modified the algorithm to look through neighbors with equal cost along with the ones with lesser cost, so that it would go through all shortest paths.
Then, I keep track of the list of previous nodes for every node explored.
Finally, I use those lists to run through the paths backwards, taking note of all unique locations.
Code:
import os # paths here = os.path.dirname(os.path.abspath(__file__)) filepath = os.path.join(here, "input.txt") # read input with open(filepath, mode="r", encoding="utf8") as f: data = f.read() from collections import defaultdict from dataclasses import dataclass import heapq as hq import math # up, right, down left DIRECTIONS = [(-1, 0), (0, 1), (1, 0), (0, -1)] # Represent a node using its location and the direction @dataclass(frozen=True) class Node: i: int j: int dir: int maze = data.splitlines() m, n = len(maze), len(maze[0]) # we always start from bottom-left corner (facing east) start_node = Node(m - 2, 1, 1) # we always end in top-right corner (direction doesn't matter) end_node = Node(1, n - 2, -1) # the graph will be updated lazily because it is too much processing # to completely populate it beforehand graph = defaultdict(list) # track nodes whose all edges have been explored visited = set() # heap to choose next node to explore # need to add id as middle tuple element so that nodes dont get compared min_heap = [(0, id(start_node), start_node)] # min distance from start_node to node so far # missing values are treated as math.inf min_dist = {} min_dist[start_node] = 0 # keep track of all previous nodes for making path prev_nodes = defaultdict(list) # utility method for debugging (prints the map) def print_map(current_node, prev_nodes): pns = set((n.i, n.j) for n in prev_nodes) for i in range(m): for j in range(n): if i == current_node.i and j == current_node.j: print("X", end="") elif (i, j) in pns: print("O", end="") else: print(maze[i][j], end="") print() # Run Dijkstra's algo while min_heap: cost_to_node, _, node = hq.heappop(min_heap) if node in visited: continue visited.add(node) # early exit in the case we have explored all paths to the finish if node.i == end_node.i and node.j == end_node.j: # assign end so that we know which direction end was reached by end_node = node break # update adjacency graph from current node di, dj = DIRECTIONS[node.dir] if maze[node.i + di][node.j + dj] != "#": moved_node = Node(node.i + di, node.j + dj, node.dir) graph[node].append((moved_node, 1)) for x in range(3): rotated_node = Node(node.i, node.j, (node.dir + x + 1) % 4) graph[node].append((rotated_node, 1000)) # explore edges for neighbor, cost in graph[node]: cost_to_neighbor = cost_to_node + cost # The following condition was changed from > to >= because we also want to explore # paths with the same cost, not just better cost if min_dist.get(neighbor, math.inf) >= cost_to_neighbor: min_dist[neighbor] = cost_to_neighbor prev_nodes[neighbor].append(node) # need to add id as middle tuple element so that nodes dont get compared hq.heappush(min_heap, (cost_to_neighbor, id(neighbor), neighbor)) print(f"Part 1: {min_dist[end_node]}") # PART II: Run through the path backwards, making note of all coords visited = set([start_node]) path_locs = set([(start_node.i, start_node.j)]) # all unique locations in path stack = [end_node] while stack: node = stack.pop() if node in visited: continue visited.add(node) path_locs.add((node.i, node.j)) for prev_node in prev_nodes[node]: stack.append(prev_node) print(f"Part 2: {len(path_locs)}")
The only improvement I can think of is to implement a dead-end finder that blocks off dead ends, so the search algorithm skips every dead end that does not contain the end tile ("E"). By "block" I mean artificially adding a wall at the entrance of the dead end; this should keep the search from going down dead ends at all. It would be improbable, but there might be an input with a ridiculously long dead end.
Interesting, how would one write such a finder? I can only think of backtracking DFS, but that seems like it would outweigh the savings.
I took some time out of my day to implement a solution that beats just running your solution by about 90 ms. This is because the algorithm for filling in all dead ends takes 9-10 milliseconds and reduces the time your algorithm needs by 95-105 ms!
A decent improvement for so many lines of code, but it is what it is. Using `.index` and `.rindex` on strings is just way too fast. There might be a faster way to replace cells with `#`, or I could switch to complete binary bit manipulation for everything, but that is incredibly difficult to think about right now.
But here is the monster script that seemingly does it about ~90 milliseconds faster than your current script version, because it eliminates wasted time in your Dijkstra's algorithm and fills all dead ends with minimal impact on performance. Could there be corner cases I didn't think of? Maybe, but saving time on your algorithm is better than trying to be extra sure to eliminate every dead end, and I am skipping loops because your algorithm will handle those better than a flood-fill type approach would. (Remember the first run of a modified script will run a little slow.)
As of right now, the slowest part of the script is your Dijkstra's algorithm. I could try to implement my own solver that isn't piggy-backing off yours, but I think that is more than I care to do right now. I also wasn't going to bother reducing the LOC of the giant match-case; it's fast and serves its purpose well enough.
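For reference, the basic idea without any of the string-index tricks can be written as a simple per-cell pass (a minimal sketch assuming the maze is a list of mutable row lists with a solid wall border; not the optimized code in the paste):

def fill_dead_ends(rows):
    """Repeatedly wall off any open cell with at most one open neighbour,
    leaving 'S' and 'E' untouched, until nothing changes."""
    changed = True
    while changed:
        changed = False
        for y in range(1, len(rows) - 1):
            for x in range(1, len(rows[y]) - 1):
                if rows[y][x] != '.':
                    continue
                open_neighbours = sum(
                    rows[y + dy][x + dx] != '#'
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1))
                )
                if open_neighbours <= 1:   # dead end: turn it into a wall
                    rows[y][x] = '#'
                    changed = True
    return rows

The optimized version described above jumps straight to candidate cells with `.index`/`.rindex` instead of rescanning every cell, which is where its speedup comes from.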
Those are some really great optimizations, thank you! I understand what you’re doing generally, but I’ll have to step through the code myself to get completely familiar with it.
It’s interesting that string operations win out here over graph algorithms even though this is technically a graph problem. Honestly, your write-up and optimizations deserve their own post.
If you are wondering how my string operations manage to be fast, it is because of the simple fact that Python's `index` and `rindex` are practically O(n) time (and for my use of them after slicing the string, closer to O(log(n)) time). Here are some more tricks in case you wish to think about that more: [link]
Also, the more verbose option is simply a trick of batch processing: why bother checking each node individually when we already know that a dead end is simply made of straight lines? If an exceedingly large maze were just a simple two-spiral design, where one spiral is a dead end and the other has the end flag, then my batch processing would simply outpace the slower per-node iterator. In this scenario there is a 50/50 chance you pick the right spiral, while it is just easier to look for which one is a dead end and backtrack to choose the other option. Technically that is slower than just guessing correctly on the first try, but that feels awfully similar to how a bogosort works: you either randomly choose paths (removing previously checked paths) or deterministically enumerate all paths. A dead end is extremely easy to find and culls all those paths as extremely low priority, so in this spiral scenario it is the safer option compared to accidentally choosing the wrong path.
What would be fastest would be to simply convert this to a bit-like representation: each wall could be 1 and empty spots could be 0. You would just have to keep track of the locations of the start and end separately.
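As a rough illustration of that idea (an assumption about how it could look, not code from this thread), each row can be packed into an integer with one bit per wall:

def pack_rows(lines):
    """Pack each maze row into an int: bit x set means column x is a wall.
    The start and end positions are remembered separately."""
    masks, start, end = [], None, None
    for y, line in enumerate(lines):
        mask = 0
        for x, c in enumerate(line):
            if c == '#':
                mask |= 1 << x
            elif c == 'S':
                start = (y, x)
            elif c == 'E':
                end = (y, x)
        masks.append(mask)
    return masks, start, end

def is_wall(masks, y, x):
    return (masks[y] >> x) & 1 == 1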
Ah yes, I was right: simply string-slicing away lines that were already checked does make the regex faster. While that code is smaller, it is still slower than the more verbose option, which is only because of the iterative approach of checking each node in the `While(True)` loop instead of building 2 lists of lines and manipulating them with `.index()` and `.rindex()`: [ Paste ]
However, if you notice, even the regex is slower than my iterative approach with `index` by 3-5 milliseconds. While having one line for the regex is nice, I do think it is harder to read and could prove slightly more cumbersome, as it could be useless in other types of challenges, while the iterative approach is nice and easily adaptable to most circumstances that may have some quirks.
Also, it shows that the more verbose option is still faster by 7 ms, because checking each node in the `While(True)` loop is a rather slow approach. So really, there is nothing to it overall, and the main slowdown is in your solver, which I didn't touch at all because I only wanted to show the dead-end filling part.
I tried to compartmentalize it. The search is in its own function, and while that `fill_in_dead_ends` function is extremely large, it is mostly replicated code; the match-case statement could just be removed. A lot of the code is extremely verbose and has an air of being "unrolled", because I was tweaking each part of the process individually to see what could be done. The entire large match-case all ended up being very similar code. I could condense it down a lot, but I know doing so would hurt performance unless plenty of time were spent tweaking it. So unrolled copy-pasta was good.
The real shining star is the `find_next_dead_end` function, because the regex before took 99% of the roughly ~300 ms total. Even with this fast iterative function, `find_next_dead_end` still takes about 75% of the time needed to finish filling in all the dead ends. This is because as the search ran deeper into the string it would start slowing down, with roughly O(n*m) time complexity, where n is the line width and m is the number of lines searched through until the next match. My approach was to store the relative position for each search, which conveniently was `curr_row,curr_col`. Another way to reduce the cost of the logic that makes sure newly created dead ends also get filled was to simply check whether the current search for the next dead end restarted from the top after it finished checking the final line. Looking at the line-by-line profiler from IPython, the entire function spends most of its time at `while('.' in r[:first_loc]):` and `first_loc = r[:first_loc].rindex('.')`, which is funny because those are still fast: over 11k+ hits on the same lines with only a 5-5.5 microsecond impact each time they ran.
Though I could likely remove that strange logic by moving it into `find_next_dead_end` instead of having that strange if/elif/else statement in the `fill_in_dead_ends` logic. There is so much that could still be improved, but it was quick and dirty.
Now that I am thinking about it, there would be a way to make the regex faster by simply string slicing lines off the search, so that the regex doesn’t spend time looking at the same start of string.
Ah well, my idea was at a high-level view. Here is a naive approach that should accomplish this; not sure how else I would accomplish it without putting more thought in to make it faster:
edit: whoops, sorry, I had broken the regex string and had to check that E and S don't get deleted lol
This is what the first example would look like:
###############
#...#####....E#
#.#.#####.###.#
#.....###...#.#
#.###.#####.#.#
#.###.......#.#
#.#######.###.#
#...........#.#
###.#.#####.#.#
#...#.....#.#.#
#.#.#.###.#.#.#
#.....#...#.#.#
#.###.#.#.#.#.#
#S###.....#...#
###############
This is what the second example would look like:
#################
#...#...#...#..E#
#.#.#.#.#.#.#.#.#
#.#.#.#...#...#.#
#.#.#.#####.#.#.#
#...#.###.....#.#
#.#.#.###.#####.#
#.#...###.#.....#
#.#.#####.#.###.#
#.#.###.....#...#
#.#.###.#####.###
#.#.#...###...###
#.#.#.#####.#####
#.#.#.......#####
#.#.#.###########
#S#...###########
#################
For this challenge, it will only give a noticeable improvement on larger maps, and it is especially effective if there are no loops (i.e. only one path), because it just removes all paths that lead to a dead end.
For smaller maps there is no improvement, or even worse performance, as there are not enough dead ends for any search algorithm to waste much time on. So for completeness' sake, you would test various map sizes with various amounts of dead ends and find the map size at which it starts making sense to fill in all dead ends with walls. Also, when you know a maze has only one path, this is more optimal than any path-finding algorithm, provided the map is big enough; otherwise you can find the path fast enough that filling in dead ends is not needed and you can just path-find directly.
For our input, I think this would not help, as the map should NOT be large enough. This naive approach is too costly; it would probably take a faster approach than this naive one to pay off.
Actually, testing this naive approach on the smaller examples, it does have a slight edge over not filling in dead ends. This means the regex is likely slowing down as the map gets larger, so something that can find dead ends faster would be a better choice than the one-line regex we have right now.
I guess the location of both S and E in the input does matter, because the maze could end up with S and E close enough together that most, if not all, dead ends never waste the Dijkstra's algorithm's time. However, my input had S and E on opposite corners, so the regex is likely the culprit in why filling in dead ends gets slower on the larger map.
If you look at the profiler output: on the smaller examples, the naive approach loses a negligible amount of time and improves your algorithm's time for both part 1 and part 2 by a few tenths of a millisecond. On the larger input, however, the naive approach takes a huge hit, losing about 350-400 ms on filling in dead ends while only improving your algorithm's time by 90 ms. So while filling in dead ends does improve performance for your algorithm, this approach just has too much overhead, which means a less naive approach could significantly improve the total solving time.
prev_nodes[neighbor].append(node)
I think you're potentially adding too many neighbours to `prev_nodes` here. At the time you explore the edge, you're not yet sure whether the path to the edge's target via the current node will be the cheapest.
Good catch! IIRC, only when a node is selected from the min heap can we guarantee that the cost to that node will not go any lower. This definitely seems like a bug, but I still got the correct answer for the samples and my input somehow ¯\_(ツ)_/¯
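For what it's worth, the usual fix (and what the Rust solution above does) is to reset the parent list when a strictly cheaper path is found and only append on exact ties; a sketch of that relaxation step, reusing the names from the Python solution above:

import heapq as hq
import math

def relax(node, neighbor, cost, min_dist, prev_nodes, min_heap):
    """Relax one edge while keeping prev_nodes correct for part 2:
    reset parents on a strictly cheaper path, append only on ties."""
    new_cost = min_dist[node] + cost
    old_cost = min_dist.get(neighbor, math.inf)
    if new_cost < old_cost:
        min_dist[neighbor] = new_cost
        prev_nodes[neighbor] = [node]          # drop parents recorded via costlier paths
        hq.heappush(min_heap, (new_cost, id(neighbor), neighbor))
    elif new_cost == old_cost:
        prev_nodes[neighbor].append(node)      # tie: remember this parent too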