Stream processing engines are an increasingly popular choice for implementing modern AI-powered applications, since they enable timely, up-to-date responses to be computed from a practically infinite volume of data generated by connected devices, sensors, users, and infrastructure. Although computing close to the edge reduces bandwidth usage, the smaller size of an edge datacenter leads to higher costs for storage and computation relative to the cloud (e.g., AWS Wavelength is 40% more expensive than EC2). It is therefore impractical (and, given limited resources, may not even be possible) to run all applications continuously on the edge. Instead, the efficient use of edge computing requires the dynamic reconfiguration of applications based on the workload. Unfortunately, existing stream processing engines are poorly suited for edge applications: they do not handle frequent reconfiguration and replication gracefully, leading to significant application stoppage times. This project explores stream processing systems for edge environments that enable seamless application reconfiguration with minimal stoppage time. Our approach uses a network of software routers to transfer data tuples between operators and implements a late-binding approach to routing, in which an operator does not need to know the location of the next operator that will consume the tuples it produces. Instead, tuples are shepherded to their destination by the routers. This approach allows flexible reconfiguration with minimal stoppage time and without requiring global coordination.
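The late-binding idea can be sketched as follows. This is a minimal illustration, not the project's implementation: the `Router` and `Endpoint` names and the buffering policy are assumptions made for the example. Producers address tuples by the logical name of the next operator; the router resolves the current physical location at forwarding time, and reconfiguration only rebinds that name, without any producer-side coordination.

```python
# Hypothetical sketch of late-binding tuple routing. Names and structure
# are illustrative, not taken from the project's actual codebase.

class Endpoint:
    """A physical location where an operator currently runs."""
    def __init__(self):
        self.received = []

    def deliver(self, tup):
        self.received.append(tup)


class Router:
    """Software router that resolves an operator's location per tuple."""
    def __init__(self):
        self.locations = {}  # logical operator name -> current Endpoint
        self.pending = {}    # operator name -> tuples buffered mid-reconfiguration

    def update_location(self, operator, endpoint):
        # Reconfiguration: rebind the logical name, then flush any tuples
        # that arrived while the operator had no bound location.
        self.locations[operator] = endpoint
        for tup in self.pending.pop(operator, []):
            endpoint.deliver(tup)

    def forward(self, operator, tup):
        # Late binding: the producer names only the logical next operator;
        # the router shepherds the tuple to wherever it currently runs,
        # buffering it if the operator is being moved.
        endpoint = self.locations.get(operator)
        if endpoint is None:
            self.pending.setdefault(operator, []).append(tup)
        else:
            endpoint.deliver(tup)
```

Under this scheme, migrating an operator between edge nodes touches only the router's binding for that name; producers keep emitting tuples throughout, which is what keeps stoppage time minimal and avoids global coordination.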