
LaunchLoop Wins AWS Breaking Barriers Hackathon with ARANO: Autonomous RAN Optimization Agent

9th November 2025 · 15 min read

Team LaunchLoop receiving first place award at AWS Breaking Barriers Hackathon

Quick Answer

Team LaunchLoop—Kevin Collins (Echofold CEO) and Stephen Dillon—won first place ($5,000 prize) at the AWS Breaking Barriers for Agentic Networks Hackathon (31st October - 2nd November 2025) with ARANO, a production-ready autonomous AI system featuring hierarchical agentic orchestration with 5 primary agents and specialized sub-agents, all powered by Claude 3.5 Sonnet v2 via AWS Bedrock. Built in 48 hours at Dogpatch Labs, Dublin, competing against 377 participants across 68 projects, the system uses Aurora PostgreSQL, digital twin simulation, and multi-layer agent coordination to autonomously manage radio access network optimization.

TL;DR

  • Second consecutive win: Team LaunchLoop secures first place at AWS Breaking Barriers Hackathon ($5,000 prize) just two months after winning the National AI Challenge in September 2025, demonstrating mastery of agentic AI systems
  • ARANO platform: Hierarchical agentic orchestration system with 5 primary agents (Manager, Network, Optimization, Assessor, Fixer) plus specialized sub-agents for Network and Optimization layers, all powered by Claude 3.5 Sonnet v2 via AWS Bedrock
  • 48-hour technical marathon: Working until 3am each night, back up at 7am, including a critical 16-hour database troubleshooting session on Saturday that prevented Phase 2 and Phase 3 implementation
  • Massive dataset engineering: Stephen Dillon built and optimized a network coverage dataset of roughly 1 billion data points, intelligently compressed to 25 million records for real-time agent processing
  • Production AWS architecture: Complete infrastructure with EC2 backend, Aurora PostgreSQL Serverless v2, S3 data storage, Amplify frontend hosting, and digital twin simulation—all deployed in under 48 hours

Most hackathon teams deliver prototypes. Some deliver minimum viable products. Team LaunchLoop delivered a complete, production-ready autonomous AI system in 48 hours.

On 2nd November 2025, Kevin Collins (CEO of Echofold) and Stephen Dillon, representing WorkIQ as Team LaunchLoop, claimed first place at the AWS Breaking Barriers for Agentic Networks Hackathon at Dogpatch Labs, Dublin. The prize: $5,000. The competition: 377 participants across 68 projects, 43 in-person teams, and 25 virtual teams.

Their winning project, ARANO (Autonomous Radio Access Network Optimization), isn't a clever demo or a promising proof-of-concept. It's a sophisticated hierarchical multi-agent AI platform running on enterprise AWS infrastructure, featuring agentic orchestration across 5 primary agents (Manager, Network, Optimization, Assessor, Fixer) with 6 specialized sub-agents each for the Network and Optimization layers (Coverage, Interference, Capacity, Throughput, PCI, Layer Management). All agents run Claude 3.5 Sonnet v2 via AWS Bedrock, coordinating through digital twin simulation, real-time WebSocket communication, and Aurora PostgreSQL Serverless v2—autonomously managing telecommunications networks at scale.

This is Team LaunchLoop's second major hackathon victory in under two months, following their first-place win at Ireland's National AI Challenge in September 2025. But this time, the challenge was different: build an agentic AI system that could operate autonomously in one of the most complex technical domains—radio access network optimization. And they had exactly 48 hours to do it.

AWS Breaking Barriers Hackathon statistics: 377 participants, 68 projects, Team LaunchLoop first place

01. The Challenge: 48 Hours to Build Production Infrastructure

The AWS Breaking Barriers for Agentic Networks Hackathon, running from 31st October through 2nd November 2025, challenged teams to build autonomous AI systems—agents that could perceive, reason, plan, and act independently. But Kevin and Stephen didn't aim for a simple chatbot or a basic automation script. They set their sights on one of telecommunications' most complex problems: autonomous network optimization.

Radio Access Networks (RAN) are the backbone of mobile telecommunications, managing thousands of cell towers, each with thousands of configurable parameters (antenna tilt, transmission power, frequency bands, physical cell identity). These networks serve millions of users simultaneously, and any misconfiguration can cause dropped calls, poor signal quality, or network congestion. Network engineers typically spend weeks analyzing performance data, identifying issues, and carefully planning parameter changes.
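To make the scale of that configuration surface concrete, here is a minimal sketch of what a single cell's tunable parameters might look like as a data structure; the field names and ranges are illustrative assumptions, not ARANO's actual schema.

```typescript
// Illustrative shape of one cell's tunable RAN parameters (not ARANO's real schema).
interface CellConfig {
  cellId: string;           // e.g. "CORK-0421-S2" (hypothetical identifier)
  antennaTiltDeg: number;   // electrical antenna tilt in degrees
  txPowerDbm: number;       // transmission power
  frequencyBandMhz: number; // operating frequency band, e.g. 1800
  pci: number;              // Physical Cell Identity (0-503 in LTE)
}
```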

Team LaunchLoop proposed to automate this entire process with a multi-agent AI system that could analyze network state, detect anomalies, generate optimization strategies, resolve conflicts between competing objectives, validate changes through digital twin simulation, and deploy to live networks—all autonomously. And they had to build the complete system, from AWS infrastructure to AI agents to frontend interface, in 48 hours.

ARANO System Requirements

  • Process coverage data from 600+ cell towers across County Cork
  • Analyze millions of grid cells with real-time performance metrics (RSRP, SINR, throughput)
  • Coordinate 5 autonomous AI agents with different specializations and decision-making styles
  • Validate all changes through digital twin simulation before deployment
  • Deliver real-time updates via WebSocket to interactive map visualization
  • Complete end-to-end optimization workflow in under 2 minutes

As Kevin wrote in his LinkedIn post before the hackathon: "The scope of what we're planning to build in 48 hours? Most people would call it impossible. We're talking about the kind of ambitious integration that teams usually spend months on, not a weekend." He wasn't exaggerating.

02. ARANO Architecture: Hierarchical Agentic Orchestration

At the heart of ARANO is a sophisticated hierarchical agentic orchestration system where 5 primary AI agents coordinate with specialized sub-agents—all powered by Claude 3.5 Sonnet v2 via AWS Bedrock—to manage network optimization autonomously. Each primary agent has a distinct role, personality (controlled by temperature settings), and decision-making authority, whilst the Network and Optimization agents delegate to 6 specialized sub-agents each (Coverage, Interference, Capacity, Throughput, PCI, Layer Management) for domain-specific analysis.
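As a rough sketch of how this hierarchy could be expressed in configuration, the snippet below captures the roles, temperatures, and sub-agent lists detailed in the entries that follow; the code structure itself is illustrative rather than the team's implementation.

```typescript
// Agent roster and temperatures as described in this section; structure is illustrative.
type SubAgent = "Coverage" | "Interference" | "Capacity" | "Throughput" | "PCI" | "Layer";

const SUB_AGENTS: SubAgent[] = ["Coverage", "Interference", "Capacity", "Throughput", "PCI", "Layer"];

const AGENTS = {
  manager:      { temperature: 0.3, role: "Agentic orchestration coordinator", subAgents: [] as SubAgent[] },
  network:      { temperature: 0.7, role: "Strategic long-term optimizer",     subAgents: SUB_AGENTS },
  optimization: { temperature: 0.5, role: "Tactical short-term fixer",         subAgents: SUB_AGENTS },
  assessor:     { temperature: 0.2, role: "Conflict resolver and validator",   subAgents: [] as SubAgent[] },
  fixer:        { temperature: 0.1, role: "Digital twin validation gate",      subAgents: [] as SubAgent[] },
};
```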

Hierarchical Agent Architecture

1. Manager Agent (Temperature: 0.3)

Role: Agentic orchestration coordinator

Analyzes network state, identifies problem regions (poor coverage, high interference, capacity constraints, carrier aggregation imbalance), assigns problems to specialist agents, and applies operator intent scoring (balance, coverage-focused, interference-focused, or capacity-focused optimization strategies). Coordinates the entire multi-agent workflow.
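The article names four operator intents but not their weightings, so the sketch below uses placeholder numbers purely to illustrate how intent scoring might bias problem prioritisation.

```typescript
// Hypothetical intent weightings; the four intents come from the article,
// the numbers are placeholders for illustration only.
type OperatorIntent = "balance" | "coverage" | "interference" | "capacity";

const INTENT_WEIGHTS: Record<OperatorIntent, { coverage: number; interference: number; capacity: number }> = {
  balance:      { coverage: 0.34, interference: 0.33, capacity: 0.33 },
  coverage:     { coverage: 0.60, interference: 0.20, capacity: 0.20 },
  interference: { coverage: 0.20, interference: 0.60, capacity: 0.20 },
  capacity:     { coverage: 0.20, interference: 0.20, capacity: 0.60 },
};

// Score a problem region under the operator's chosen intent (illustrative).
function scoreRegion(
  intent: OperatorIntent,
  region: { coverageGap: number; interferenceLevel: number; loadLevel: number } // all normalised 0-1
): number {
  const w = INTENT_WEIGHTS[intent];
  return w.coverage * region.coverageGap + w.interference * region.interferenceLevel + w.capacity * region.loadLevel;
}
```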

2. Network Agent (Temperature: 0.7)

Role: Strategic long-term optimizer with specialized sub-agents

Identifies persistent network patterns and trends. Provides strategic recommendations for parameter changes over a weeks-to-months timeframe. A more creative temperature setting (0.7) allows for innovative optimization strategies.

Sub-Agents: Coverage, Interference, Capacity, Throughput, PCI (Physical Cell ID), Layer (Carrier Aggregation)

3. Optimization Agent (Temperature: 0.5)

Role: Tactical short-term fixer with specialized sub-agents

Detects real-time anomalies and performance degradation. Provides immediate tactical fixes over a minutes-to-hours timeframe. Monitors for critical load (90%+), high load (80%+), poor SINR, and degraded throughput ratios. Assigns urgency levels (Immediate: 95, Urgent: 90, High: 80) to prioritize fixes.

Sub-Agents: Coverage, Interference, Capacity, Throughput, PCI, Layer (same specializations as Network agent but with tactical focus)
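A minimal sketch of how the load thresholds and urgency levels above might map to code; the mapping of poor SINR and degraded throughput onto the High level, and the exact cut-offs for them, are assumptions.

```typescript
// Urgency levels named in the article: Immediate 95, Urgent 90, High 80.
const URGENCY = { immediate: 95, urgent: 90, high: 80 } as const;

interface CellKpi {
  loadUtilization: number; // 0-1 fraction of capacity in use
  sinrDb: number;          // signal quality
  throughputRatio: number; // observed vs expected throughput, 0-1
}

// Assign an urgency score to a cell, or null if no tactical fix is needed.
// The SINR and throughput cut-offs below are illustrative assumptions.
function urgencyFor(kpi: CellKpi): number | null {
  if (kpi.loadUtilization >= 0.9) return URGENCY.immediate; // critical load (90%+)
  if (kpi.loadUtilization >= 0.8) return URGENCY.urgent;    // high load (80%+)
  if (kpi.sinrDb < 0 || kpi.throughputRatio < 0.5) return URGENCY.high; // poor SINR / degraded throughput
  return null;
}
```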

4. Assessor Agent (Temperature: 0.2)

Role: Conflict resolver and decision validator

Receives all recommendations from Network and Optimization agents. Identifies conflicts (e.g., one agent wants to increase power, another wants to decrease it). Applies weighted scoring based on priority (30%), confidence (30%), and impact (40%). Produces unified recommendation set with safety constraints enforced (max 10% coverage change, max 5% interference increase).
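The Assessor's weighted scoring and safety constraints translate directly into a small function. A minimal sketch, assuming priority, confidence, and impact are normalised to the 0-1 range:

```typescript
interface AgentRecommendation {
  priority: number;           // 0-1
  confidence: number;         // 0-1
  impact: number;             // 0-1
  coverageChange: number;     // predicted fractional change, e.g. 0.08 = +8%
  interferenceChange: number; // predicted fractional change
}

// Weighted score from the article: priority 30%, confidence 30%, impact 40%.
const assessorScore = (r: AgentRecommendation) =>
  0.3 * r.priority + 0.3 * r.confidence + 0.4 * r.impact;

// Safety constraints: max 10% coverage change, max 5% interference increase.
const withinSafetyLimits = (r: AgentRecommendation) =>
  Math.abs(r.coverageChange) <= 0.10 && r.interferenceChange <= 0.05;

// Produce the unified set: discard unsafe recommendations, rank the rest by score.
function resolveConflicts(recs: AgentRecommendation[]): AgentRecommendation[] {
  return recs.filter(withinSafetyLimits).sort((a, b) => assessorScore(b) - assessorScore(a));
}
```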

5. Fixer Agent (Temperature: 0.1)

Role: Digital twin validator and deployment gatekeeper

Most deterministic agent (temperature 0.1) for precise validation. Tests every recommendation in the digital twin before allowing deployment. Simulates parameter changes, calculates impact on coverage and interference, and marks recommendations as "safe to deploy" only if confidence >70% and changes stay within safety thresholds. Uses Model Context Protocol (MCP) tools to interact with the digital twin simulation engine.
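A minimal sketch of the Fixer's deployment gate as described above: a change is only marked safe when simulated confidence exceeds 70% and the predicted impact stays within the safety thresholds. The `simulateInTwin` function stands in for the MCP-backed digital twin and is an assumed interface.

```typescript
interface ParameterChange {
  cellId: string;
  parameter: "antennaTilt" | "txPower" | "pci";
  newValue: number;
}

interface TwinResult {
  confidence: number;         // 0-1 simulation confidence
  coverageChange: number;     // predicted fractional change
  interferenceChange: number; // predicted fractional change
}

// Placeholder for the MCP-backed digital twin call (assumed interface).
declare function simulateInTwin(change: ParameterChange): Promise<TwinResult>;

// Mark a recommendation "safe to deploy" only if it passes the simulated checks.
async function validateForDeployment(change: ParameterChange): Promise<boolean> {
  const result = await simulateInTwin(change);
  return (
    result.confidence > 0.7 &&                 // confidence must exceed 70%
    Math.abs(result.coverageChange) <= 0.10 && // max 10% coverage change
    result.interferenceChange <= 0.05          // max 5% interference increase
  );
}
```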

The workflow is a carefully orchestrated four-step process:

  1. Manager Agent analyzes network state and assigns problems to specialists
  2. Network and Optimization Agents work in parallel, generating strategic and tactical recommendations
  3. Assessor Agent resolves conflicts and produces a unified recommendation set
  4. Fixer Agent validates each recommendation in the digital twin and marks it safe for deployment

This architecture isn't theoretical. It's running live at arano.io, processing real network data, and demonstrating autonomous decision-making across 600+ cell towers. The typical workflow completes in under 2 minutes, from initial network analysis to validated recommendations ready for deployment.
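A minimal sketch of that four-step loop as an orchestrator function; each agent call is assumed to wrap a Claude 3.5 Sonnet v2 invocation via Bedrock, and the function names are illustrative.

```typescript
// Illustrative orchestration of the four-step ARANO workflow described above.
type NetworkState = unknown;
type Problem = unknown;
type Recommendation = unknown;

// Stand-ins for the Bedrock-backed agents (assumed interfaces).
declare function managerAgent(state: NetworkState): Promise<Problem[]>;
declare function networkAgent(problems: Problem[]): Promise<Recommendation[]>;
declare function optimizationAgent(problems: Problem[]): Promise<Recommendation[]>;
declare function assessorAgent(recs: Recommendation[]): Promise<Recommendation[]>;
declare function fixerAgent(rec: Recommendation): Promise<boolean>;

async function runOptimizationCycle(state: NetworkState): Promise<Recommendation[]> {
  // Step 1: Manager analyzes network state and assigns problems to specialists.
  const problems = await managerAgent(state);

  // Step 2: Network (strategic) and Optimization (tactical) agents run in parallel.
  const [strategic, tactical] = await Promise.all([
    networkAgent(problems),
    optimizationAgent(problems),
  ]);

  // Step 3: Assessor resolves conflicts into a unified recommendation set.
  const unified = await assessorAgent([...strategic, ...tactical]);

  // Step 4: Fixer validates each recommendation in the digital twin.
  const validated: Recommendation[] = [];
  for (const rec of unified) {
    if (await fixerAgent(rec)) validated.push(rec);
  }
  return validated; // only these are marked safe to deploy
}
```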

03. The Technical Reality: Database Battles and Sleep Deprivation

Hackathons aren't just about brilliant ideas and elegant architecture. They're about execution under extreme time pressure, debugging at 2am, and making impossible decisions when critical systems fail.

Team LaunchLoop's 48-hour sprint was a masterclass in resilience and technical problem-solving. The team worked until 3am each night, caught a few hours of sleep, and was back coding at 7am. This wasn't optional—it was the only way to deliver a production-grade system in the time available.

The Saturday Database Crisis

On Saturday—the crucial middle day of the hackathon—disaster struck. The Aurora PostgreSQL database began experiencing mysterious performance issues. Queries that should have completed in milliseconds were taking seconds. The digital twin simulation ground to a halt. Agent workflows timed out.

For 16 consecutive hours, Kevin and Stephen debugged. They examined connection pooling configurations, analyzed slow query logs, investigated Aurora Serverless v2 scaling behaviour, checked for table lock contention, and monitored ACU utilization patterns. Every potential fix led to another dead end.

The cost? They lost an entire day. Phase 2 of their architecture (advanced anomaly detection algorithms) and Phase 3 (predictive network capacity planning) never got implemented. The team had to make a brutal decision: deliver a complete, working Phase 1 system, or risk everything trying to implement features they might not have time to debug.

They chose wisely. By Sunday evening, ARANO was operational. The database issues had been resolved (a combination of index optimization and Aurora auto-scaling configuration adjustments). The 5-agent workflow was executing flawlessly. The digital twin was validating recommendations correctly. And the frontend was rendering network state in real-time with WebSocket updates.

The 1 Billion Data Point Challenge

Meanwhile, Stephen Dillon was solving a different class of problem: how do you process a dataset of roughly 1 billion data points in real time?

The network coverage data representing 600+ cell towers across Ireland contained roughly 1 billion data points—measurements of signal strength (RSRP), signal quality (SINR), download/upload throughput, cell overlap counts, and interference patterns for millions of geographic grid cells. This data needed to be:

  • Loaded into memory for agent analysis
  • Queried in under 100ms for real-time decision-making
  • Indexed for spatial queries (find all cells in region X)
  • Processed by AI agents to detect coverage gaps, interference, and capacity issues

Stephen's solution was elegant: he intelligently "arranged" the dataset. By identifying redundancies, aggregating measurements where precision wasn't critical, and pre-computing derived metrics (like coverage hulls and interference zones), he compressed the operational dataset to 25 million optimized records whilst maintaining the fidelity needed for accurate network optimization.

This wasn't just data compression—it was data architecture designed specifically for agent consumption. Every field, every index, every pre-calculated metric was optimized for the questions the AI agents would ask: "Which cells are causing interference in this region?" "What's the coverage overlap for cell tower X?" "Which grid cells have SINR below threshold?"
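A minimal sketch of the kind of lookup the agents might run against that 25-million-record store, using the node-postgres client against Aurora; the table and column names are assumptions rather than ARANO's actual schema.

```typescript
import { Pool } from "pg"; // node-postgres client for Aurora PostgreSQL

const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Find grid cells with poor signal quality inside a bounding box.
// Table/column names (grid_cells, sinr_db, lat, lon) are illustrative assumptions;
// an index covering the lat/lon columns is what keeps queries in the sub-100ms range.
async function poorSinrCells(
  box: { minLat: number; maxLat: number; minLon: number; maxLon: number },
  sinrThresholdDb = 0
) {
  const { rows } = await pool.query(
    `SELECT grid_id, lat, lon, rsrp_dbm, sinr_db, downlink_mbps
       FROM grid_cells
      WHERE lat BETWEEN $1 AND $2
        AND lon BETWEEN $3 AND $4
        AND sinr_db < $5
      ORDER BY sinr_db ASC
      LIMIT 500`,
    [box.minLat, box.maxLat, box.minLon, box.maxLon, sinrThresholdDb]
  );
  return rows;
}
```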

04. AWS Infrastructure at Scale

ARANO's architecture leverages a complete AWS stack, deployed and configured entirely within the hackathon timeframe:

Service | Configuration | Purpose
--- | --- | ---
AWS Bedrock Runtime | Claude 3.5 Sonnet v2, 5 concurrent agents | AI inference for all agent decision-making, 5-10 API calls per workflow
RDS Aurora PostgreSQL | Serverless v2, 0.5-4.0 ACU auto-scaling | Primary data store for 25M cell coverage records, sub-100ms query performance
EC2 | t3.medium instance, 2 vCPU, 4 GiB RAM | Backend server: Node.js/Express, Socket.io WebSocket, MCP server, orchestrator
S3 | 10.5 GiB storage, Standard class | Cell tower data, coverage maps, alarm logs, GIS shapefiles
AWS Amplify | Next.js 15 SSR, CloudFront CDN | Frontend hosting with React 19, Mapbox visualization, real-time WebSocket client
Route53 | Hosted zone, A/CNAME records | DNS management for arano.io domain and api.arano.io subdomain
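To make the Bedrock row concrete, here is a rough sketch of an agent invocation with a per-agent temperature using the AWS SDK's Converse API; the model ID, region, and prompts are assumptions to be checked against your own Bedrock setup.

```typescript
import { BedrockRuntimeClient, ConverseCommand } from "@aws-sdk/client-bedrock-runtime";

const client = new BedrockRuntimeClient({ region: "eu-west-1" }); // region is an assumption

// Claude 3.5 Sonnet v2 model ID on Bedrock (verify against your region's model list).
const MODEL_ID = "anthropic.claude-3-5-sonnet-20241022-v2:0";

// Invoke one agent with its role prompt and temperature (e.g. Manager at 0.3, Fixer at 0.1).
async function invokeAgent(systemPrompt: string, userPrompt: string, temperature: number): Promise<string> {
  const response = await client.send(
    new ConverseCommand({
      modelId: MODEL_ID,
      system: [{ text: systemPrompt }],
      messages: [{ role: "user", content: [{ text: userPrompt }] }],
      inferenceConfig: { temperature, maxTokens: 2048 },
    })
  );
  return response.output?.message?.content?.[0]?.text ?? "";
}
```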

The architecture demonstrates production-grade engineering principles:

  • Serverless auto-scaling: Aurora Serverless v2 scales from 0.5 to 4.0 ACU based on demand, reducing costs whilst maintaining performance
  • IAM-based security: EC2 instance profile for S3 and Bedrock access, security groups restricting database access to specific IPs
  • Real-time communication: Socket.io WebSocket delivering network updates, agent logs, and recommendations to the frontend with sub-second latency (see the sketch after this list)
  • CDN-accelerated frontend: Amplify's CloudFront integration provides global low-latency access to the interactive network visualization
  • Digital twin architecture: Pre-calculated simulations (±1°, ±2° antenna tilt scenarios) stored in database for instant impact prediction
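A minimal sketch of how the real-time channel noted in the list above might be wired with Socket.io on the Express backend; the event names and payloads are illustrative, not ARANO's actual protocol.

```typescript
import { createServer } from "http";
import express from "express";
import { Server } from "socket.io";

const app = express();
const httpServer = createServer(app);
const io = new Server(httpServer, { cors: { origin: "https://arano.io" } });

io.on("connection", (socket) => {
  // Send the current network snapshot to a newly connected map client.
  socket.emit("network:state", { towers: 600 /* placeholder summary payload */ });
});

// Called by the orchestrator as agents emit logs and validated recommendations;
// the event names ("agent:log", "recommendation:validated") are assumptions.
export function broadcastAgentLog(entry: { agent: string; message: string }): void {
  io.emit("agent:log", entry);
}

export function broadcastRecommendation(rec: unknown): void {
  io.emit("recommendation:validated", rec);
}

httpServer.listen(3001);
```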

This isn't a hackathon demo running on localhost. It's a live, publicly accessible platform at arano.io, deployed on AWS infrastructure, processing real data, and demonstrating autonomous AI decision-making. The judges recognized this immediately.

05. What This Win Means for Autonomous AI Systems

The day after winning the AWS Breaking Barriers Hackathon, Kevin and Stephen were brought on stage by Jan Hofmeyr at the FYUZ conference to present ARANO to the telecommunications and AI community. The platform generated immediate interest from network operators and infrastructure providers.

But ARANO's significance extends beyond telecommunications. It demonstrates three critical principles for building autonomous AI systems:

1. Multi-Agent Specialization Works

Rather than building one monolithic AI that tries to handle all aspects of network optimization, ARANO uses specialized agents with distinct roles, decision-making styles (temperature settings), and areas of expertise. The Manager orchestrates, the Network agent thinks strategically, the Optimization agent acts tactically, the Assessor resolves conflicts, and the Fixer validates. This division of labour mirrors how human organizations structure themselves—and it works for AI agents too.

2. Digital Twins Enable Safe Autonomy

The most critical insight: autonomous systems need safe environments to test decisions before deployment. ARANO's digital twin simulates network changes and calculates impact before any parameters are modified in the live network. This is the key to trustworthy AI—agents can be creative and autonomous whilst still maintaining safety constraints through simulation-based validation.

3. Production-Grade Systems Can Be Built Rapidly

With modern cloud infrastructure (AWS), powerful foundation models (Claude 3.5 Sonnet v2 via Bedrock), and the right architectural patterns (serverless databases, WebSocket communication, digital twin simulation), a two-person team can deploy enterprise-grade AI systems in 48 hours. The barrier to building autonomous AI isn't technical complexity anymore—it's having the right architecture and the experience to execute under pressure.

Team LaunchLoop's winning streak—National AI Challenge in September, AWS Breaking Barriers Hackathon in November—isn't luck. It's pattern recognition applied to hackathon success: understand what judges value (production-ready systems, real-world impact, technical sophistication), execute flawlessly under time pressure, and tell a compelling story about autonomous AI.

As Kevin teaches in his "How to Win a Hackathon" workshop: "The barrier to building isn't technical skill anymore. It's having an idea and knowing how to execute it." ARANO proves that autonomous AI systems are no longer science fiction—they're buildable, deployable, and winning hackathons.

06. Frequently Asked Questions

What is ARANO and what does it do?
ARANO (Autonomous Radio Access Network Optimization) is a multi-agent AI system that autonomously manages and optimizes telecommunications networks. It uses 5 specialized AI agents powered by AWS Bedrock (Claude 3.5 Sonnet v2) working collaboratively to analyze network performance, detect anomalies, resolve conflicts, and validate changes through digital twin simulation before deployment to live networks. The platform processes data from 600+ cell towers, analyzes millions of coverage grid cells, and delivers validated optimization recommendations in under 2 minutes end-to-end.
How did Team LaunchLoop build a production system in 48 hours?
Team LaunchLoop worked relentlessly throughout the hackathon, staying up until 3am each night and resuming at 7am. They leveraged AWS infrastructure (EC2, Aurora PostgreSQL Serverless v2, Bedrock, S3, Amplify) to rapidly deploy a production-grade architecture. The team faced a critical 16-hour database troubleshooting session on Saturday that prevented them from implementing Phase 2 (advanced anomaly detection) and Phase 3 (predictive capacity planning) of their planned build, but they made the strategic decision to deliver a complete, working Phase 1 system rather than risk an incomplete multi-phase implementation. The combination of cloud infrastructure, powerful foundation models, and years of experience building AI systems enabled rapid execution.
What AWS services did ARANO use?
ARANO's architecture leveraged a comprehensive AWS stack: AWS Bedrock Runtime (Claude 3.5 Sonnet v2 for all 5 AI agents), RDS Aurora PostgreSQL Serverless v2 (0.5-4.0 ACU auto-scaling database storing 25 million cell coverage records), EC2 t3.medium instance (backend server with Node.js/Express/Socket.io for real-time communication), S3 (10.5 GiB cell tower data storage including coverage maps and GIS shapefiles), AWS Amplify (Next.js 15 frontend hosting with CloudFront CDN for global low-latency access), and Route53 (DNS management for arano.io domain). The entire stack was deployed and configured during the 48-hour hackathon.
What made the 1 billion data point dataset technically challenging?
Stephen Dillon built a dataset of roughly 1 billion data points representing real-world network coverage from 600+ cell towers across Ireland. Each data point included signal strength (RSRP), signal quality (SINR), download/upload throughput, cell overlap counts, and interference patterns for millions of geographic grid cells. The challenge was processing this massive dataset in real time for agent decision-making whilst maintaining sub-100ms query performance. Stephen solved this by intelligently "arranging" the data—identifying redundancies, aggregating measurements where precision wasn't critical, and pre-computing derived metrics like coverage hulls and interference zones. This compressed the operational dataset to 25 million optimized records whilst maintaining the fidelity needed for accurate network optimization. The architecture was designed specifically for agent consumption, with every field, index, and pre-calculated metric optimized for the questions AI agents would ask during network analysis.
How does ARANO's 5-agent system work together?
ARANO uses a hierarchical four-step workflow: (1) Manager Agent (temp 0.3) orchestrates the workflow, analyzes network state, identifies problem regions, and assigns problems to specialists. (2) Network Agent (temp 0.7) and Optimization Agent (temp 0.5) work in parallel—Network provides strategic long-term optimization recommendations whilst Optimization delivers tactical short-term fixes and anomaly detection. (3) Assessor Agent (temp 0.2) receives all recommendations from both agents, identifies conflicts, and applies weighted scoring (priority 30%, confidence 30%, impact 40%) to produce a unified recommendation set whilst enforcing safety constraints. (4) Fixer Agent (temp 0.1) validates each recommendation through digital twin simulation, calculates impact on coverage and interference, and marks recommendations as "safe to deploy" only if confidence exceeds 70% and changes stay within safety thresholds. The entire workflow completes in under 2 minutes, with 5-10 Claude API calls per optimization cycle.
Is this Team LaunchLoop's first hackathon win?
No, this is Team LaunchLoop's second major hackathon victory in under two months. In September 2025, they won first place at Ireland's National AI Challenge (competing against 540+ teams) with GradGenie, an AI exam grading system that processes Leaving Cert exams in 1 week instead of 26 days, saving Irish taxpayers €48 million annually. The AWS Breaking Barriers Hackathon win in November 2025 continues their winning streak, demonstrating consistent excellence in rapid AI system development, multi-agent architecture, and production-grade engineering under extreme time pressure. Kevin Collins also teaches a "How to Win a Hackathon" workshop, with multiple attendees going on to win their own competitions.
What happened at the FYUZ conference after the hackathon?
The day after winning the AWS Breaking Barriers Hackathon, Team LaunchLoop was brought on stage by Jan Hofmeyr at the FYUZ conference to present ARANO to the telecommunications and AI community. This provided immediate exposure for their autonomous network optimization platform to industry leaders, network operators, infrastructure providers, and potential customers in the RAN optimization space. The platform, which is live at arano.io, generated significant interest from attendees looking to implement autonomous AI systems for network management.

Building the Future of Autonomous AI

Team LaunchLoop's victory at the AWS Breaking Barriers Hackathon demonstrates that autonomous AI systems—agents that perceive, reason, plan, and act independently—are no longer theoretical. They're buildable, deployable, and delivering real-world value.

ARANO proves three critical insights: multi-agent specialization enables sophisticated decision-making, digital twin simulation makes AI autonomy safe and trustworthy, and modern cloud infrastructure (AWS Bedrock, Aurora Serverless, Amplify) allows rapid deployment of production-grade systems.

At Echofold, we're building this future every day. Whether you're optimizing telecommunications networks, automating business processes, or implementing AI-driven decision-making, the principles demonstrated by ARANO apply universally: specialized agents, safe validation through simulation, and production-grade infrastructure.

Want to build autonomous AI systems for your business? Explore Fractal AI to see how private, EU-based AI can transform your operations.


Additional Resources

  • ARANO Platform (Live Demo)
  • AWS Breaking Barriers Hackathon Event Page
  • FYUZ Conference
  • How to Win a Hackathon Workshop by Kevin Collins
  • LaunchLoop Wins National AI Challenge 2025 with GradGenie
  • Join LaunchLoop Community
  • More Echofold News & Updates