Ten engineering-grade modules for a digital municipal ecosystem. Each system links data structures, algorithms, SDGs, efficiency and datasets.
All code in Atharva/
1 • Real-Time Weather Signature Detection
SDG 11 — 11.5
Boyer–Moore · KMP
Arrays · Structures
Process continuous environmental sensor streams and detect hazardous weather signals instantly.
Boyer–Moore enables fast skipping through safe regions, while KMP guarantees worst-case linear-time scanning.
Data Structures: Arrays, Structures — Algorithms: Boyer–Moore, KMP
Boyer–Moore — Sublinear average
Leverages the "bad character rule" to skip multiple bytes of clean weather data at once. This sublinear performance is critical for low-power IoT sensors that must process continuous streams without draining battery.
KMP — O(N) worst
Used as a failsafe when noise increases. Unlike naive search (O(NM)), KMP guarantees the system never hangs during a high-frequency storm event, so alerts are generated with minimal latency.
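The KMP failsafe above can be sketched as follows. This is a minimal illustration, assuming sensor data arrives as plain strings; the function name and sample inputs are illustrative, not part of the system:

```python
# Sketch of KMP search. The prefix (failure) table lets the scan resume
# after a mismatch without re-reading matched input, which is what
# guarantees O(N + M) time even on noisy, adversarial streams.

def kmp_search(text: str, pattern: str) -> list:
    """Return the start index of every occurrence of pattern in text."""
    if not pattern:
        return []
    # fail[i] = length of the longest proper prefix of pattern[:i+1]
    # that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    # Scan the text, falling back via the table on mismatch.
    hits, k = [], 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            hits.append(i - k + 1)
            k = fail[k - 1]     # keep scanning for overlapping matches
    return hits
```

Note that `k` never decreases by more than it previously increased, which is the amortized argument behind the linear bound.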
2 • Skill Directory Autocomplete
Trie · Hash Table
Trie — O(K) search
Search time depends only on the length of the typed word (K), not the database size (N). This ensures that autocomplete remains instantaneous even as the directory grows to millions of users.
Hash Table — O(1) lookup
Once a user selects a skill, fetching the supplier's phone/address is immediate. Hashing decouples the metadata storage from the search logic, keeping the memory footprint optimized.
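A minimal sketch of the two-structure design, assuming skills are lowercase strings; the class layout, skill names, and phone numbers are illustrative:

```python
# Trie for O(K) prefix autocomplete, plus a dict (hash table) that maps a
# selected skill straight to supplier metadata in O(1).

class TrieNode:
    def __init__(self):
        self.children = {}
        self.is_word = False

class Trie:
    def __init__(self):
        self.root = TrieNode()

    def insert(self, word):
        node = self.root
        for ch in word:
            node = node.children.setdefault(ch, TrieNode())
        node.is_word = True

    def complete(self, prefix):
        """All stored words starting with prefix; O(K) to find the subtree."""
        node = self.root
        for ch in prefix:
            if ch not in node.children:
                return []
            node = node.children[ch]
        out = []
        def walk(n, acc):
            if n.is_word:
                out.append(acc)
            for ch, child in sorted(n.children.items()):
                walk(child, acc + ch)
        walk(node, prefix)
        return out

directory = Trie()
contacts = {}                                  # skill -> supplier metadata
for skill, phone in [("plumbing", "555-0101"),
                     ("plastering", "555-0102"),
                     ("wiring", "555-0103")]:
    directory.insert(skill)
    contacts[skill] = phone
```

Keeping metadata out of the trie nodes is what the section means by decoupling: the trie stays small and cache-friendly, and the hash table absorbs record growth.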
3 • Asset Inventory & Expiry Tracking
Treap (randomized BST) · Min-Heap
Treap — O(log N) expected
Combines BST and Heap properties to maintain a probabilistically balanced tree. This prevents the tree from degenerating into a linked list (O(N)) during sequential data entry, keeping asset searches fast.
Min-Heap — O(1) find-min
The system must always know the *next* item to expire. The heap structure allows extracting the most urgent item in constant time without sorting the entire inventory database.
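The expiry queue can be sketched with the standard library's `heapq`; item names and expiry days are illustrative:

```python
import heapq

# A min-heap keyed on expiry day keeps the most urgent item at index 0,
# so "what expires next?" is an O(1) peek and removal is an O(log N) sift.
# No sorting of the full inventory is ever needed.

inventory = []
for entry in [(45, "antiseptic"), (12, "vaccine batch A"), (30, "saline")]:
    heapq.heappush(inventory, entry)           # O(log N) insert

next_up = inventory[0]                         # O(1) find-min (peek)
most_urgent = heapq.heappop(inventory)         # O(log N) extract-min
```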
4 • Drone-Based Traffic Analytics
QuickSort · Fenwick Tree
Drone-captured counts are partitioned rapidly using QuickSort, while Fenwick Trees compute rolling congestion indices for anomaly detection.
Traffic Data → QuickSort → Tiers
↘ Fenwick → Trends
Algorithm
Detailed Efficiency Analysis
QuickSort — O(N log N)
Chosen for its in-place sorting capability, which minimizes RAM usage on edge devices (drones). It rapidly buckets vehicles into speed tiers with excellent cache locality.
Fenwick Tree — O(log N) update
Allows for updating traffic frequencies and calculating prefix sums (congestion levels) simultaneously. This enables real-time "rolling" analytics without re-processing the entire hour's data.
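A minimal Fenwick tree over per-minute vehicle counts illustrates the rolling analytics; the minute/count values are illustrative:

```python
# Fenwick (binary indexed) tree: update() and prefix() are both O(log N).
# A rolling congestion index over minutes (l, r] is prefix(r) - prefix(l),
# computed without re-processing the raw hour of data.

class Fenwick:
    def __init__(self, n):
        self.n = n
        self.tree = [0] * (n + 1)       # 1-indexed internally

    def update(self, i, delta):
        while i <= self.n:
            self.tree[i] += delta
            i += i & -i                 # climb to the next responsible node

    def prefix(self, i):
        s = 0
        while i > 0:
            s += self.tree[i]
            i -= i & -i                 # strip the lowest set bit
        return s

counts = Fenwick(60)                    # one slot per minute of the hour
for minute, cars in [(1, 5), (2, 9), (3, 4), (4, 7)]:
    counts.update(minute, cars)

rolling = counts.prefix(3) - counts.prefix(1)   # window: minutes 2..3
```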
5 • Market Price Monitoring
Fenwick Tree · Segment Tree
Handle frequent price updates efficiently. Fenwick Trees support prefix averages; Segment Trees detect volatility spikes using range min/max queries.
Prices → Fenwick → Avg
Segment → Min/Max → Alerts
Structure
Detailed Efficiency Analysis
Segment Tree — O(log N) query
Stores range information (min/max price) in internal nodes. This allows the system to query "What was the highest volatility between 10 AM and 2 PM?" instantly, even with millions of ticks.
Fenwick Tree — O(log N) update
Used specifically for calculating running averages (prefix sums). It has a lower constant factor than Segment Trees, making it faster for simple accumulation tasks in high-frequency trading.
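A compact iterative max segment tree shows the range-query side; the price ticks are illustrative:

```python
# Iterative (bottom-up) segment tree storing range maxima. Leaves hold the
# raw ticks; each internal node holds the max of its two children, so a
# range query touches only O(log N) nodes.

class SegTree:
    def __init__(self, data):
        self.n = len(data)
        self.t = [0] * (2 * self.n)
        self.t[self.n:] = data
        for i in range(self.n - 1, 0, -1):
            self.t[i] = max(self.t[2 * i], self.t[2 * i + 1])

    def range_max(self, l, r):
        """Max of data[l..r] inclusive."""
        res = float("-inf")
        l += self.n
        r += self.n + 1
        while l < r:
            if l & 1:                   # l is a right child: take it, move on
                res = max(res, self.t[l]); l += 1
            if r & 1:                   # r is past a left child: take it
                r -= 1; res = max(res, self.t[r])
            l //= 2
            r //= 2
        return res

ticks = SegTree([101, 99, 105, 98, 110, 103])
spike = ticks.range_max(1, 4)           # highest price over ticks 1..4
```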
6 • Minimum-Cost Warehouse Network
Kruskal's MST · Union-Find
Build the minimum-cost backbone for warehouse connectivity. Kruskal's MST reduces overall transportation cost; Union-Find speeds up connectivity checks.
Edges → Sort → Kruskal + Union-Find → MST
Algorithm
Detailed Efficiency Analysis
Kruskal's MST — O(E log E)
More efficient than Prim’s algorithm for sparse road networks (where E << V²). It sorts edges by cost to guarantee the absolute minimum budget required to connect all warehouses.
Union-Find — O(α(N))
Uses path compression to check if two locations are already connected in near-constant time. This prevents cycle formation instantly without traversing the graph, speeding up the MST construction.
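The Kruskal + union-find pairing can be sketched as follows; the edge list and node count are illustrative:

```python
# Kruskal's MST: sort edges by cost (O(E log E)), then greedily add any
# edge that joins two different components. Union-find with path
# compression and union by size makes each connectivity check near O(1).

def kruskal(n, edges):
    parent = list(range(n))
    size = [1] * n

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression (halving)
            x = parent[x]
        return x

    total, mst = 0, []
    for cost, u, v in sorted(edges):        # the sort dominates the runtime
        ru, rv = find(u), find(v)
        if ru == rv:
            continue                        # same component: would form a cycle
        if size[ru] < size[rv]:
            ru, rv = rv, ru
        parent[rv] = ru                     # union by size
        size[ru] += size[rv]
        mst.append((u, v))
        total += cost
    return total, mst

edges = [(4, 0, 1), (1, 1, 2), (3, 0, 2), (7, 2, 3), (2, 1, 3)]
cost, tree = kruskal(4, edges)
```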
7 • Warehouse Inventory Lookup
Hash Table · Skip List
Combine constant-time hashing for lookups with skip lists for ordered traversal and export reports.
Product ID → Hash → Warehouse
→ Skip List (sorted)
Structure
Detailed Efficiency Analysis
Skip List — O(log N)
A probabilistic alternative to balanced trees that is easier to implement concurrently. It allows the system to generate sorted inventory reports for auditing while simultaneously handling insertions.
Hash Table — O(1)
Handles the high-volume "Where is Product X?" queries at the loading dock. By bypassing the sorted structure for direct lookups, it avoids logarithmic lookup delays during peak shipping hours.
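The two-index idea can be sketched briefly. Note the sorted list maintained with `bisect` here is only a stand-in for the skip list's ordered layer (a real skip list would also make the inserts O(log N), where `insort` shifts in O(N)); product IDs and warehouse names are illustrative:

```python
import bisect

# Dual indexing: a dict gives O(1) "where is product X?" answers, while a
# separately maintained sorted list of IDs supports ordered traversal for
# audit/export reports without sorting on demand.

location = {}        # hash table: product ID -> warehouse
sorted_ids = []      # ordered index (skip-list stand-in)

def add_product(pid, warehouse):
    location[pid] = warehouse
    bisect.insort(sorted_ids, pid)     # keep the ordered index sorted

for pid, wh in [("SKU-9", "W2"), ("SKU-1", "W1"), ("SKU-5", "W3")]:
    add_product(pid, wh)

dock_query = location["SKU-5"]         # O(1) direct lookup
audit_order = list(sorted_ids)         # sorted traversal for the report
```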
8 • All-Pairs Logistics Routing
Floyd–Warshall · BFS
Compute complete pairwise shortest paths for logistics routing. BFS is used to validate connectivity where needed.
Matrix → Floyd-Warshall → Distance Matrix
↘ BFS
Algorithm
Detailed Efficiency Analysis
Floyd–Warshall — O(V³)
While computationally heavy upfront, it pre-calculates the shortest path between *every* possible pair of zones. This allows the dispatch system to answer millions of routing queries in O(1) time during the day.
Adjacency Matrix — O(V²) space
Provides direct O(1) access to edge weights, which is faster than iterating lists for dense graphs where almost every distribution center connects to every other one.
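A minimal Floyd–Warshall over an adjacency matrix, with `INF` marking missing roads; the zone matrix is illustrative:

```python
# Floyd-Warshall: after the triple loop, dist[i][j] holds the shortest
# distance for EVERY pair of zones, so daytime dispatch queries reduce to
# O(1) matrix reads. The O(V^3) cost is paid once, up front.

INF = float("inf")

def floyd_warshall(dist):
    n = len(dist)
    for k in range(n):                  # allow zone k as an intermediate stop
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist

zones = [
    [0,   5,   INF, 10],
    [INF, 0,   3,   INF],
    [INF, INF, 0,   1],
    [INF, INF, INF, 0],
]
shortest = floyd_warshall(zones)        # e.g. 0 -> 1 -> 2 -> 3 beats 0 -> 3
```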
9 • Inspector Route Planning
Dijkstra · Binary Heap
Inspectors need shortest-path routing across a sparse city graph. Dijkstra with a binary heap ensures fast, reliable route planning.
Road Graph → Dijkstra(Heap) → Shortest Path
Algorithm
Detailed Efficiency Analysis
Dijkstra (Heap) — O(E log V)
Unlike Bellman-Ford, Dijkstra greedily selects the closest node, making it vastly faster for non-negative road weights. The Binary Heap optimizes the "extract-min" operation from O(V) to O(log V).
Sparse Graph — Optimization
Since city roads are sparse (avg node degree ~4), using an adjacency list with Dijkstra ensures performance scales with the number of roads (E), not the square of intersections (V²).
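Dijkstra with a binary heap over an adjacency list can be sketched as follows; the road graph and weights are illustrative:

```python
import heapq

# Dijkstra on an adjacency list with a binary heap: cost scales with the
# number of roads, O(E log V). Stale heap entries are skipped rather than
# decreased in place, the usual idiom with Python's heapq.

def dijkstra(graph, src):
    dist = {node: float("inf") for node in graph}
    dist[src] = 0
    pq = [(0, src)]                     # (distance so far, node)
    while pq:
        d, u = heapq.heappop(pq)        # extract-min in O(log V)
        if d > dist[u]:
            continue                    # stale entry: u already settled
        for v, w in graph[u]:
            if d + w < dist[v]:         # relax the edge u -> v
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

roads = {
    "A": [("B", 2), ("C", 5)],
    "B": [("C", 1), ("D", 4)],
    "C": [("D", 1)],
    "D": [],
}
dist = dijkstra(roads, "A")
```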
10 • Batch Fraud Detection & Recall
Rabin–Karp · Lookup Table
Rabin–Karp rolling hashes detect fraudulent batch signatures. A lookup table links each match to warehouse metadata for recall generation.
Codes → Rabin-Karp → Suspects → Verify → Recall
Algorithm
Detailed Efficiency Analysis
Rabin-Karp — Rolling Hash
Updates the hash in O(1) time as it slides over the data window. This allows the system to check for multiple fraud patterns simultaneously in a single pass, rather than rescanning for each pattern.
Lookup Table — O(1) Verify
Used to instantly verify if a hash match is a true positive or a collision. This ensures that the recall system provides 100% accuracy without the performance penalty of string comparison for every non-match.
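The rolling-hash scan with collision verification can be sketched as follows; the base/modulus choices and sample batch codes are illustrative:

```python
# Rabin-Karp: the window hash is updated in O(1) per slide (subtract the
# outgoing character's weight, multiply by the base, add the incoming
# character). Every hash hit is confirmed by a direct comparison, so a
# collision can never trigger a false recall.

def rabin_karp(text, pattern, base=256, mod=10**9 + 7):
    n, m = len(text), len(pattern)
    if m == 0 or m > n:
        return []
    high = pow(base, m - 1, mod)        # weight of the outgoing character
    ph = th = 0
    for i in range(m):                  # hash the pattern and first window
        ph = (ph * base + ord(pattern[i])) % mod
        th = (th * base + ord(text[i])) % mod
    hits = []
    for i in range(n - m + 1):
        if th == ph and text[i:i + m] == pattern:   # verify: no false positives
            hits.append(i)
        if i < n - m:                   # roll the window one character right
            th = ((th - ord(text[i]) * high) * base + ord(text[i + m])) % mod
    return hits
```

In the recall pipeline, each returned index would key into the lookup table to fetch the matching batch's warehouse metadata.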
Implementation Analysis Table — Data Structures & Algorithms
The table below summarises all Data Structures and Algorithms used across the ten business cases.
Each row indicates whether the structure/algorithm was required, where it was used, and its efficiency characteristics.
Any unused items are marked: No – Not part of system requirements.
Data Structure / Algorithm | Used? | Where Used | Space Efficiency | Time Efficiency
Arrays | Yes | Case 1, 4, 10 | O(n) | O(1) access
Structures | Yes | Case 1 (pattern metadata) | O(1) | O(1) field access
List | No – Not part of system requirements | — | — | —
Stack | No – Not part of system requirements | — | — | —
Queue | Yes | Case 8 (BFS) | O(n) | O(1) enqueue/dequeue
Binary Tree | No – Not part of system requirements | — | — | —
Binary Search Tree | Yes | Case 3 (expiry ordering) | O(n) | O(log n) avg ops
AVL Tree | No – Not part of system requirements | — | — | —
2–3 Tree | No – Not part of system requirements | — | — | —
Red-Black Tree | No – Not part of system requirements | — | — | —
Trie | Yes | Case 2 (autocomplete) | O(total chars) | O(k) search
Heap (Min-Heap) | Yes | Case 3, Case 9 | O(n) | O(log n) ops
Lookup Table | Yes | Case 10 (batch metadata) | O(n) | O(1) lookup
Sparse Table | No – Not part of system requirements | — | — | —
Fenwick Tree | Yes | Case 4, Case 5 | O(n) | O(log n) ops
Segment Tree | Yes | Case 5 | O(4n) | O(log n) ops
Skip List | Yes | Case 7 | O(n) | O(log n) expected
Union-Find | Yes | Case 6 | O(n) | O(α(n)) merges
Hashing | Yes | Case 2, Case 7 | O(n) | O(1) avg lookup
DFS | Yes | Case 6 | O(V + E) | O(V + E)
BFS | Yes | Case 8 | O(V) | O(V + E)
Bubble Sort | No – Not part of system requirements | — | — | —
Selection Sort | No – Not part of system requirements | — | — | —
Insertion Sort | No – Not part of system requirements | — | — | —
Quick Sort | Yes | Case 4 | O(1) extra | O(n log n) avg
Merge Sort | Yes | Case 3 | O(n) | O(n log n)
Brute-Force Pattern Search | Yes | Case 10 (verification) | O(1) | O(nm) worst
Rabin-Karp | Yes | Case 10 | O(1) rolling | O(n + m)
Boyer–Moore | Yes | Case 1 | O(m + Σ) | Sublinear avg
Knuth-Morris-Pratt | Yes | Case 1 | O(m) | O(n + m)
Heap Sort | No – Not part of system requirements | — | — | —
Kruskal | Yes | Case 6 | O(E) | O(E log E)
Prim | No – Not part of system requirements | — | — | —
Dijkstra | Yes | Case 9 | O(V) | O(E log V)
Floyd–Warshall | Yes | Case 8 | O(n²) | O(n³)
Bellman–Ford | No – Not part of system requirements | — | — | —
Course Learning Reflections
After completing the Design and Analysis of Algorithms course, I can clearly see a shift in how I think about problems and solutions. Before this course, my instinct was to focus on writing code that works and produces the correct output. DAA forced me to pause, step back, and think deeper about how a solution behaves as the input grows and why certain approaches fail under scale. I began to see time and space not as abstract concepts or formulas to memorize, but as real constraints that influence every engineering decision. This change in perspective made me more deliberate and thoughtful while approaching even simple problems.
One of the most impactful parts of the course was learning to trace algorithms and break them down step by step. Initially, this felt slow and frustrating, but over time it trained my mind to be more patient, logical, and precise. Writing and solving recurrence relations helped me understand the hidden cost of recursion and how small design choices can drastically affect performance. Graph-based problems, in particular, pushed me to think systematically and maintain multiple states in my head at once. Through this process, I realized that strong problem solving is less about speed and more about clarity and structured reasoning.
The course also made it clear that algorithms are not about memorizing predefined methods, but about understanding the intuition behind them. I learned to question assumptions, analyze trade-offs, and justify why one approach makes more sense than another in a given context. This helped me move away from a trial-and-error mindset toward a more reasoned and confident approach. I also became more comfortable dealing with ambiguity, where the problem statement does not directly point to a single solution path.
Overall, Design and Analysis of Algorithms has significantly strengthened my analytical thinking and discipline. It has taught me how to approach unfamiliar problems without panic, how to break complexity into manageable parts, and how to defend my choices logically. More than just an academic subject, DAA has shaped the way I think as an engineer, giving me a mindset that I know will carry forward into higher-level courses, interviews, and real-world problem solving.