Compare commits

1 commit

| Author | SHA1 | Date |
|---|---|---|
| | 86900b0bd4 | |

.gitignore (vendored, 3 lines changed)
@@ -1,2 +1 @@
dist
node_modules
server
CLAUDE.md (540 lines changed)
@@ -4,36 +4,18 @@ This file provides guidance to Claude Code (claude.ai/code) when working with code

## Project Overview

Skybridge is an all-in-one "startup starterpack" monolith application designed to provide everything needed to start a software company with a single platform. Built with a microfrontend architecture, it combines multiple business-critical services including API key management, user authentication, permissions, and a modular plugin system for additional business applications.

This is an API Key Management Service (KMS) built with a Go backend and a React TypeScript frontend. The system manages API keys, user authentication, and permissions, and provides both static tokens and user JWT tokens with hierarchical permission scopes.

**Key Technologies:**

- **Backend**: Go 1.23+ with Gin router, PostgreSQL, JWT tokens
- **Frontend**: React 18+ with TypeScript, Mantine UI components
- **Module Federation**: Webpack 5 Module Federation for plugin architecture
- **Infrastructure**: Podman/Docker Compose, Nginx
- **Backend**: Go 1.23+ with Gin/Gorilla Mux, PostgreSQL, JWT tokens
- **Frontend**: React 19+ with TypeScript, Ant Design 5.27+
- **Infrastructure**: Podman/Docker Compose, Nginx, Redis (optional)
- **Security**: HMAC token signing, RBAC permissions, rate limiting

## Startup Platform Architecture

## Architecture

### Business Applications (Microfrontends)

```
skybridge/
├── kms/ - API Key Management System
│   ├── [Go backend files] - Secure token lifecycle management
│   └── web/ - KMS frontend (port 3002)
├── web/ - Main Shell Dashboard (port 3000)
├── demo/ - Plugin Development Template (port 3001)
├── kms-frontend/ - Legacy KMS interface
└── [future modules] - Additional business applications
```
The project follows clean architecture principles with clear separation:

### Plugin Architecture (Module Federation)

- **Shell Dashboard (port 3000)**: Central hub for all business applications
- **KMS Module (port 3002)**: API key and authentication management
- **Demo Module (port 3001)**: Template for building new business modules
- **Extensible Design**: Easy addition of new business applications (CRM, billing, analytics, etc.)

### Backend Architecture (KMS)

```
cmd/server/ - Application entry point
internal/ - Go backend core logic
@@ -41,190 +23,438 @@ internal/ - Go backend core logic
├── repository/ - Data access interfaces and PostgreSQL implementations
├── services/ - Business logic layer
├── handlers/ - HTTP request handlers (Gin-based)
├── middleware/ - Authentication, logging, security middleware
├── auth/ - Multiple auth providers (header, JWT, OAuth2, SAML)
├── middleware/ - Authentication, logging, security, CSRF middleware
├── config/ - Configuration management with validation
├── auth/ - JWT, OAuth2, SAML, header-based auth providers
├── cache/ - Redis caching layer (optional)
├── metrics/ - Prometheus metrics collection
└── database/ - Database connection and migrations
kms-frontend/ - React TypeScript frontend with Ant Design
migrations/ - PostgreSQL database migration files
test/ - Integration and E2E tests (both Go and bash)
docs/ - Comprehensive technical documentation
nginx/ - Nginx configuration for reverse proxy
```

## Development Commands

### Startup Platform Development

### Go Backend

```bash
# Start complete platform in development mode (run in separate terminals)

# Terminal 1: Main Dashboard Shell
cd web
npm install
npm run dev   # Central dashboard on port 3000

# Terminal 2: KMS Business Module
cd kms/web
npm install
npm run dev   # API Key Management on port 3002

# Terminal 3: Demo/Template Module
cd demo
npm install
npm run dev   # Plugin template on port 3001

# Production build
npm run build # In each business module directory
```
### Platform Backend Development

```bash
# Start core platform backend with authentication services
cd kms
# Run the server locally (requires environment variables)
INTERNAL_HMAC_KEY=test-hmac-key JWT_SECRET=test-jwt-secret AUTH_SIGNING_KEY=test-signing-key go run cmd/server/main.go

# Build platform backend
go build -o platform-server ./cmd/server
# Build the binary
go build -o api-key-service ./cmd/server

# Run platform tests
# Run tests (uses kms_test database)
go test -v ./test/...

# Test specific business modules
# Run tests with coverage
go test -v -coverprofile=coverage.out ./test/...
go tool cover -html=coverage.out -o coverage.html

# Run specific test suites
go test -v ./test/ -run TestHealthEndpoints
go test -v ./test/ -run TestApplicationCRUD
go test -v ./test/ -run TestStaticTokenWorkflow
go test -v ./test/ -run TestConcurrentRequests
```
### Full Platform Deployment (Recommended)

### React Frontend

```bash
# Navigate to frontend directory
cd kms-frontend

# Install dependencies (Node 24+, npm 11+)
npm install

# Start development server
npm start

# Build for production
npm run build

# Run tests
npm test
```

### Podman Compose & Development Environment

**CRITICAL**: This project uses `podman-compose`, not `docker-compose`.

```bash
# Start complete startup platform (database, API, all business modules)
# Start all services (PostgreSQL, API, Nginx, Frontend)
podman-compose up -d

# Start with forced rebuild after code changes
podman-compose up -d --build
# Start with SSO testing enabled (Keycloak + SAML IdP)
podman-compose -f docker-compose.yml -f docker-compose.sso.yml up -d

# View platform logs
podman-compose logs -f kms-api-service
# Check service health
curl http://localhost:8081/health

# View logs
podman-compose logs -f

# View specific service logs
podman-compose logs -f api-service
podman-compose logs -f postgres
podman-compose logs -f keycloak
podman-compose logs -f saml-idp

# Stop entire platform
# Stop services
podman-compose down

# Check platform health
curl http://localhost:8081/health
# Stop SSO services
podman-compose -f docker-compose.yml -f docker-compose.sso.yml down

# Rebuild services after code changes
podman-compose up -d --build
```
### Database Operations

## Database Operations

**CRITICAL**: All database operations use `podman exec` commands.

**CRITICAL**: All database operations use `podman exec` commands. Never use direct `psql` commands.

### Database Access

```bash
# Access database shell
# Access database shell (container name: kms-postgres)
podman exec -it kms-postgres psql -U postgres -d kms

# Run SQL commands via exec
podman exec -it kms-postgres psql -U postgres -c "SELECT * FROM applications LIMIT 5;"

# Check tables
# Check specific tables
podman exec -it kms-postgres psql -U postgres -d kms -c "\dt"
podman exec -it kms-postgres psql -U postgres -d kms -c "SELECT token_id, app_id, user_id FROM static_tokens LIMIT 5;"

# Reset test database
podman exec -it kms-postgres psql -U postgres -c "DROP DATABASE IF EXISTS kms_test; CREATE DATABASE kms_test;"
# Apply migrations manually if needed
podman exec -it kms-postgres psql -U postgres -d kms -f /docker-entrypoint-initdb.d/001_initial_schema.up.sql
```
## Key Architecture Patterns

### Database Testing

### Business Module Plugin System

- **Shell Dashboard** (`web/webpack.config.js`): Central hub consuming all business modules
- **Business Modules** (`kms/web/webpack.config.js`, `demo/webpack.config.js`): Independent applications exposing functionality
- **Shared Infrastructure**: React, Mantine, and icons shared across all business modules

### Startup Platform Routing

- **Central routing**: `/app/{businessModule}/*` handled by the shell dashboard
- **Module autonomy**: Each business application handles internal navigation independently
- **Isolated contexts**: No React Router conflicts between business modules

### Core Authentication System (KMS Module)

The platform's authentication system uses **exact permission names** (not wildcards); see the sketch after this list:

- **Application management**: `app.read`, `app.write`, `app.delete`
- **Token operations**: `token.read`, `token.create`, `token.revoke`
- **Repository access**: `repo.read`, `repo.write`, `repo.admin`
- **Permission management**: `permission.read`, `permission.write`, `permission.grant`, `permission.revoke`
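
The following is a minimal sketch of what exact-name permission checking can look like on the backend. The `hasPermission` helper and the hard-coded grant list are illustrative assumptions, not the service's actual implementation.

```go
package main

import "fmt"

// hasPermission reports whether a granted-permission list contains the exact
// permission name required for an operation (no wildcard expansion).
func hasPermission(granted []string, required string) bool {
	for _, p := range granted {
		if p == required {
			return true
		}
	}
	return false
}

func main() {
	granted := []string{"app.read", "token.read", "token.create"}
	fmt.Println(hasPermission(granted, "token.create")) // true
	fmt.Println(hasPermission(granted, "app.delete"))   // false: exact names only
}
```
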
### Business Application Types

Valid application configurations for the platform:

- `"static"` - Service-to-service authentication for business modules
- `"user"` - User authentication for startup team members

### Ownership Models

The platform supports different ownership structures:

- `"individual"` - Individual founder/employee ownership
- `"team"` - Team/department ownership

## Important Development Notes

### Business Module Development

- **Port allocation**: Dashboard:3000, Demo:3001, KMS:3002, future modules:3003+
- **Environment variables**: All business modules require `webpack.DefinePlugin` for `process.env`
- **Shared dependencies**: Versions must match across all business modules
- **Navigation**: Use `window.history.pushState()` and custom events for inter-module routing
- **Local Development**: Do not start the webpack microfrontends; they are already running locally

### Platform API Integration

- **Base URL**: `http://localhost:8080` (development)
- **Authentication**: Header-based with `X-User-Email: admin@example.com`
- **Duration format**: Convert human-readable formats like "24h" to seconds (86400) for the API (see the sketch after this list)
- **Permission validation**: Use exact permission names from the database, not wildcards
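
A minimal sketch of the duration conversion, using Go's standard `time.ParseDuration`. The request struct and its field name are illustrative assumptions rather than the API's exact schema.

```go
package main

import (
	"fmt"
	"time"
)

// TokenRequest is a hypothetical request payload; the real API schema may differ.
type TokenRequest struct {
	DurationSeconds int64 `json:"duration_seconds"`
}

func main() {
	// Convert a human-readable duration such as "24h" into seconds for the API.
	d, err := time.ParseDuration("24h")
	if err != nil {
		panic(err)
	}
	req := TokenRequest{DurationSeconds: int64(d.Seconds())}
	fmt.Println(req.DurationSeconds) // 86400
}
```
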
### Startup Platform UI Standards

- **UI Framework**: Mantine v7.0.0 (consistent across all business modules)
- **Icons**: Tabler Icons React (shared icon system)
- **Forms**: Mantine Form with validation (standardized form handling)
- **Notifications**: Mantine Notifications (unified notification system)

### Critical Platform Configuration

```bash
# Platform Backend Environment Variables (Required)
# Create test database (if needed)
podman exec -it kms-postgres psql -U postgres -c "CREATE DATABASE kms_test;"

# Reset test database
podman exec -it kms-postgres psql -U postgres -c "DROP DATABASE IF EXISTS kms_test; CREATE DATABASE kms_test;"

# Check test data
podman exec -it kms-postgres psql -U postgres -d kms -c "SELECT * FROM applications WHERE name LIKE 'test-%';"
```
## Testing

The project uses podman-compose for all testing environments and database operations.

### End-to-End Testing

```bash
# Start test environment with podman-compose, guaranteeing that it updates with --build
podman-compose up -d --build

# Wait for services to be ready
sleep 10

# Run comprehensive E2E tests with curl
./test/e2e_test.sh

# Test against a specific server and user
BASE_URL=http://localhost:8080 USER_EMAIL=admin@example.com ./test/e2e_test.sh

# Clean up test environment
podman-compose down
```

### Go Integration Tests

```bash
# Run Go integration tests (uses kms_test database)
go test -v ./test/...

# With podman-compose environment
podman-compose up -d
sleep 10
go test -v ./test/...
podman-compose down
```
### Test Environments & Ports

- **Port 8080**: Main API service
- **Port 8081**: Nginx proxy (main access point)
- **Port 3000**: React frontend (direct access)
- **Port 5432**: PostgreSQL database
- **Port 9090**: Metrics endpoint (if enabled)
- **Port 8090**: Keycloak SSO server (admin console)
- **Port 8091**: SimpleSAMLphp IdP (SAML console: /simplesaml)
- **Port 8443**: SimpleSAMLphp IdP (HTTPS)

The service provides different test user contexts:

- Regular user: `test@example.com`
- Admin user: `admin@example.com`
- Limited user: `limited@example.com`

### SSO Testing Users

For SSO testing with Keycloak or the SAML IdP, use these credentials:

| Email | Password | Permissions | Provider |
|-------|----------|-------------|----------|
| admin@example.com | admin123 | internal.* | Keycloak |
| test@example.com | test123 | app.read, token.read | Keycloak |
| limited@example.com | limited123 | repo.read | Keycloak |
| user1@example.com | user1pass | Basic access | SAML IdP |
| user2@example.com | user2pass | Basic access | SAML IdP |

### SSO Access Points

- **Keycloak Admin Console**: http://localhost:8090 (admin / admin)
- **SAML IdP Admin Console**: http://localhost:8091/simplesaml (admin / secret)
- **Keycloak Realm**: http://localhost:8090/realms/kms
- **SAML IdP Metadata**: http://localhost:8091/simplesaml/saml2/idp/metadata.php
## Key Configuration

### Required Environment Variables

```bash
# Security (REQUIRED - minimum 32 characters each)
INTERNAL_HMAC_KEY=<secure-hmac-key-32-chars-min>
JWT_SECRET=<secure-jwt-secret-32-chars-min>
AUTH_SIGNING_KEY=<secure-auth-key-32-chars-min>

# Platform Database
DB_HOST=postgres # Use 'postgres' for containers
DB_PORT=5432
# Database
DB_HOST=postgres # Use 'postgres' for containers, 'localhost' for local
DB_PORT=5432
DB_NAME=kms
DB_USER=postgres
DB_PASSWORD=postgres
DB_SSLMODE=disable

# Startup Platform Authentication
AUTH_PROVIDER=header
# Server
SERVER_HOST=0.0.0.0
SERVER_PORT=8080

# Authentication
AUTH_PROVIDER=header # 'header', 'sso', or 'saml'
AUTH_HEADER_USER_EMAIL=X-User-Email

# SSO / OAuth2 Configuration (for Keycloak)
OAUTH2_ENABLED=false # Set to true for OAuth2/OIDC auth
OAUTH2_PROVIDER_URL=http://keycloak:8080/realms/kms
OAUTH2_CLIENT_ID=kms-api
OAUTH2_CLIENT_SECRET=kms-client-secret
OAUTH2_REDIRECT_URL=http://localhost:8081/api/oauth2/callback

# SAML Configuration (for SimpleSAMLphp)
SAML_ENABLED=false # Set to true for SAML auth
SAML_IDP_SSO_URL=http://saml-idp:8080/simplesaml/saml2/idp/SSOService.php
SAML_IDP_METADATA_URL=http://saml-idp:8080/simplesaml/saml2/idp/metadata.php
SAML_SP_ENTITY_ID=http://localhost:8081
SAML_SP_ACS_URL=http://localhost:8081/api/saml/acs
SAML_SP_SLS_URL=http://localhost:8081/api/saml/sls

# Features
RATE_LIMIT_ENABLED=true
CACHE_ENABLED=false # Set to true to enable Redis
METRICS_ENABLED=true
```
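
As a rough illustration of the minimum-length requirement above, here is a sketch of startup validation for the required secrets. The real service does this inside `internal/config`; this standalone helper is only an assumption about the general shape.

```go
package main

import (
	"fmt"
	"os"
)

// checkRequiredSecrets enforces the documented rule that the security keys must
// be set and at least 32 characters long. Variable names come from the list above.
func checkRequiredSecrets() error {
	for _, name := range []string{"INTERNAL_HMAC_KEY", "JWT_SECRET", "AUTH_SIGNING_KEY"} {
		if len(os.Getenv(name)) < 32 {
			return fmt.Errorf("%s must be set and at least 32 characters long", name)
		}
	}
	return nil
}

func main() {
	if err := checkRequiredSecrets(); err != nil {
		fmt.Println("configuration error:", err)
		os.Exit(1)
	}
	fmt.Println("required secrets look valid")
}
```
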
### Testing & Debugging

- **Platform E2E Tests**: `./kms/test/e2e_test.sh` for the core authentication system
- **Test Users**: `admin@example.com`, `test@example.com`, `limited@example.com`
- **Service Ports**: API:8080, Nginx:8081, Dashboard:3000, DB:5432
- **Debug Business Modules**: Check the browser network tab for remoteEntry.js loading from each module

### Build & Deployment

- **Development**: Run the dashboard shell + all business modules + platform backend
- **Container Issues**: Use `podman-compose build --no-cache` if changes don't appear
- **Module Federation**: All business modules must be running for the dashboard to load them
- **Production**: Build all business modules and serve via nginx with proper CORS headers

## Security Considerations

- All platform tokens use HMAC signing with secure keys
- Permission validation at both API and UI levels across all business modules
- Rate limiting and comprehensive audit logging for startup compliance
- CORS configured for secure business module communication
- Never commit secrets or API keys to the repository

## Adding New Business Modules

To extend the startup platform with new business applications (CRM, billing, analytics, etc.):

1. **Create a new module directory** following the pattern of `demo/` or `kms/web/`
2. **Configure Module Federation** in webpack.config.js to expose the module
3. **Update the shell dashboard** (`web/webpack.config.js`) to consume the new remote
4. **Add to navigation** in `web/src/components/Navigation.tsx`
5. **Follow UI standards** using Mantine components and shared dependencies
6. **Implement authentication** using the platform's permission system
7. **Test integration**, ensuring the module loads properly in the dashboard

### Optional Configuration

```bash
# Rate Limiting
RATE_LIMIT_RPS=100
RATE_LIMIT_BURST=200
AUTH_RATE_LIMIT_RPS=5
AUTH_RATE_LIMIT_BURST=10

# Caching (Redis)
REDIS_ADDR=localhost:6379
REDIS_DB=0

# Security
MAX_AUTH_FAILURES=5
AUTH_FAILURE_WINDOW=15m
IP_BLOCK_DURATION=1h

# Logging
LOG_LEVEL=debug # debug, info, warn, error
LOG_FORMAT=json
```

## API Structure

### Core Endpoints

- **Health**: `/health`, `/ready`
- **Authentication**: `/api/login`, `/api/verify`, `/api/renew`
- **Applications**: `/api/applications` (CRUD operations)
- **Tokens**: `/api/applications/{id}/tokens` (static token management)
- **Audit**: `/api/audit/events`, `/api/audit/events/:id`, `/api/audit/stats` (audit log management)
- **Metrics**: `:9090/metrics` (Prometheus format, if enabled)
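
A minimal sketch of calling the applications endpoint with the header-based authentication described above; the response shape is not modeled and the error handling is deliberately simple.

```go
package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// List applications using header-based auth (development mode).
	req, err := http.NewRequest(http.MethodGet, "http://localhost:8080/api/applications", nil)
	if err != nil {
		panic(err)
	}
	req.Header.Set("X-User-Email", "admin@example.com")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}
```
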
### Permission System

Hierarchical permission scopes (parent permissions include child permissions):

- `internal.*` - System operations (highest level)
- `app.*` - Application management
- `token.*` - Token operations
- `repo.*` - Repository access (example domain)
- `permission.*` - Permission management

Example: the `repo` permission includes `repo.read` and `repo.write`.
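
A small sketch of how hierarchical scope matching can work, where a parent scope such as `repo` (or `repo.*`) covers `repo.read` and `repo.write`. The function below is an illustrative assumption, not the service's actual matcher.

```go
package main

import (
	"fmt"
	"strings"
)

// scopeCovers reports whether a granted scope covers a required permission.
// "repo" or "repo.*" covers "repo.read"; an exact match always covers itself.
func scopeCovers(granted, required string) bool {
	granted = strings.TrimSuffix(granted, ".*")
	if granted == required {
		return true
	}
	return strings.HasPrefix(required, granted+".")
}

func main() {
	fmt.Println(scopeCovers("repo", "repo.read"))       // true
	fmt.Println(scopeCovers("token.*", "token.revoke")) // true
	fmt.Println(scopeCovers("internal.*", "app.write")) // false
}
```
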
## Database Schema

### Key Tables

- `applications` - Application definitions with HMAC keys
- `static_tokens` - Static API tokens with prefixes
- `available_permissions` - Permission catalog
- `granted_permissions` - Token-permission relationships
- `user_sessions` - User session tracking with JWT
- `audit_events` - Comprehensive audit logging (see the sketch after this list) with fields:
  - `id`, `type`, `severity`, `status`, `timestamp`
  - `actor_id`, `actor_type`, `actor_ip`, `user_agent`
  - `resource_id`, `resource_type`, `action`, `description`
  - `details` (JSON), `request_id`, `session_id`
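
To make the `audit_events` shape concrete, here is a rough Go struct built from the fields listed above; the field types and JSON tags are assumptions and may differ from the real domain model.

```go
package domain

import "time"

// AuditEvent mirrors the audit_events columns listed above; types are assumed.
type AuditEvent struct {
	ID           string         `json:"id"`
	Type         string         `json:"type"`
	Severity     string         `json:"severity"`
	Status       string         `json:"status"`
	Timestamp    time.Time      `json:"timestamp"`
	ActorID      string         `json:"actor_id"`
	ActorType    string         `json:"actor_type"`
	ActorIP      string         `json:"actor_ip"`
	UserAgent    string         `json:"user_agent"`
	ResourceID   string         `json:"resource_id"`
	ResourceType string         `json:"resource_type"`
	Action       string         `json:"action"`
	Description  string         `json:"description"`
	Details      map[string]any `json:"details"`
	RequestID    string         `json:"request_id"`
	SessionID    string         `json:"session_id"`
}
```
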
### Migration System

- Auto-runs on startup
- Located in `/migrations/`
- Uses `golang-migrate/migrate/v4` (see the sketch after this list)
- Supports both up and down migrations
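
For reference, a minimal sketch of applying the migrations with golang-migrate outside the normal startup path; the connection string is a placeholder built from the documented development defaults.

```go
package main

import (
	"errors"
	"log"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
)

func main() {
	// Placeholder DSN matching the documented development defaults.
	m, err := migrate.New(
		"file://migrations",
		"postgres://postgres:postgres@localhost:5432/kms?sslmode=disable",
	)
	if err != nil {
		log.Fatal(err)
	}
	// Apply all pending up migrations; ErrNoChange just means we're already current.
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		log.Fatal(err)
	}
	log.Println("migrations applied")
}
```
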
## Code Patterns & Architecture

### Backend Patterns

- **Repository Pattern**: Data access via interfaces (`internal/repository/interfaces.go`); see the sketch after this list
- **Dependency Injection**: Services receive dependencies via constructors
- **Middleware Chain**: Security, auth, logging, rate limiting
- **Structured Errors**: Custom error types with proper HTTP status codes
- **Structured Logging**: Zap logger with JSON output
- **Configuration Provider**: Interface-based config with validation
- **Multiple Auth Providers**: Header, OAuth2, SAML support
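
A compact sketch of the repository-plus-constructor-injection shape described above; the interface methods and service fields are illustrative assumptions, not the contents of `internal/repository/interfaces.go`.

```go
package services

import "context"

// Application is a stand-in for the real domain type.
type Application struct {
	AppID string
}

// ApplicationRepository abstracts data access; a PostgreSQL implementation
// would live under internal/repository/postgres.
type ApplicationRepository interface {
	GetByID(ctx context.Context, id string) (*Application, error)
}

// ApplicationService receives its dependencies through its constructor.
type ApplicationService struct {
	repo ApplicationRepository
}

func NewApplicationService(repo ApplicationRepository) *ApplicationService {
	return &ApplicationService{repo: repo}
}

func (s *ApplicationService) Get(ctx context.Context, id string) (*Application, error) {
	return s.repo.GetByID(ctx, id)
}
```
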
### Frontend Patterns

- **React 19** with TypeScript
- **Ant Design 5.27+** component library
- **Context API** for authentication state (`AuthContext.tsx`)
- **Axios** for API communication with interceptors
- **React Router 7+** for navigation
- **Component Structure**: Organized by feature (Applications, Tokens, Users, Audit)
- **Audit Integration**: Real-time audit log viewing with filtering, statistics, and timeline views

### Security Patterns

- **HMAC Token Signing**: All tokens cryptographically signed (see the sketch after this list)
- **JWT with Rotation**: User tokens with refresh capability
- **Rate Limiting**: Per-endpoint and per-user limits
- **CSRF Protection**: Token-based CSRF protection
- **Audit Logging**: All operations logged with user attribution
- **Input Validation**: Comprehensive validation at all layers
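
A minimal sketch of HMAC-signing and verifying a token payload with the standard library; the key handling, token format, and hex encoding here are assumptions for illustration, not the service's actual scheme.

```go
package main

import (
	"crypto/hmac"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// sign returns a hex-encoded HMAC-SHA256 signature over the token payload.
func sign(payload, key []byte) string {
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return hex.EncodeToString(mac.Sum(nil))
}

// verify recomputes the signature and compares it in constant time.
func verify(payload, key []byte, signature string) bool {
	expected, err := hex.DecodeString(signature)
	if err != nil {
		return false
	}
	mac := hmac.New(sha256.New, key)
	mac.Write(payload)
	return hmac.Equal(mac.Sum(nil), expected)
}

func main() {
	key := []byte("test-hmac-key-at-least-32-characters!!")
	sig := sign([]byte("KMS.some-token-id"), key)
	fmt.Println(sig)
	fmt.Println(verify([]byte("KMS.some-token-id"), key, sig)) // true
}
```
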
### Audit System Architecture

- **Handler**: `internal/handlers/audit.go` - HTTP endpoints for audit data
- **Logger**: `internal/audit/audit.go` - Core audit logging functionality
- **Repository**: `internal/repository/postgres/audit_repository.go` - Data persistence
- **Frontend**: `kms-frontend/src/components/Audit.tsx` - Real-time audit viewing
- **API Service**: `kms-frontend/src/services/apiService.ts` - Frontend-backend integration
- **Event Types**: Hierarchical (e.g., `auth.login`, `app.created`, `token.validated`)
- **Filtering**: Support for date ranges, event types, statuses, users, and resource types
- **Statistics**: Aggregated metrics by type, severity, status, and time

## SSO Testing Workflow

### Quick Start - OAuth2/OIDC Testing (Keycloak)

```bash
# 1. Start services with SSO enabled
podman-compose -f docker-compose.yml -f docker-compose.sso.yml up -d

# 2. Wait for Keycloak to start (check logs)
podman-compose logs -f keycloak

# 3. Test OAuth2 login flow
curl -v "http://localhost:8090/realms/kms/protocol/openid-connect/auth?client_id=kms-api&response_type=code&redirect_uri=http://localhost:8081/api/oauth2/callback"

# 4. Access Keycloak admin console
open http://localhost:8090
# Login with: admin / admin

# 5. Test API with OAuth2 token
# (Use Keycloak to get an access token, then pass it in the Authorization: Bearer header)
```

### Quick Start - SAML Testing (SimpleSAMLphp)

```bash
# 1. Services should already be running from the previous step

# 2. Access SAML IdP admin console
open http://localhost:8091/simplesaml
# Login with: admin / secret

# 3. View IdP metadata
curl http://localhost:8091/simplesaml/saml2/idp/metadata.php

# 4. Test SAML authentication flow
# Navigate to your app; it should redirect to the SAML IdP for auth
```

### Environment Switching

```bash
# Switch to OAuth2 mode
podman exec kms-api-service sh -c "export AUTH_PROVIDER=sso OAUTH2_ENABLED=true && supervisorctl restart all"

# Switch to SAML mode
podman exec kms-api-service sh -c "export AUTH_PROVIDER=sso SAML_ENABLED=true && supervisorctl restart all"

# Switch back to header mode
podman exec kms-api-service sh -c "export AUTH_PROVIDER=header && supervisorctl restart all"
```

## Development Notes

### Critical Information

- **Go Version**: Requires Go 1.23+ (currently using 1.24.4)
- **Node Version**: Requires Node 24+ and npm 11+
- **Database**: Auto-migrations run on startup
- **Container Names**: Use `kms-postgres`, `kms-api-service`, `kms-frontend`, `kms-nginx`, `kms-keycloak`, `kms-saml-idp`
- **Default Ports**: API:8080, Nginx:8081, Frontend:3000, DB:5432, Metrics:9090, Keycloak:8090, SAML:8091
- **Test Database**: `kms_test` (separate from `kms`)
- **SSO Config**: Located in the `sso-config/` directory

### Important Files

- `internal/config/config.go` - Complete configuration management
- `docker-compose.yml` - Service definitions and environment variables
- `test/e2e_test.sh` - Comprehensive curl-based E2E tests
- `test/README.md` - Detailed testing guide
- `docs/` - Technical documentation (architecture, security, API docs)

### Development Workflow

1. Always use `podman-compose` (not `docker-compose`)
2. Database operations via `podman exec` only
3. Set the required environment variables for local dev (HMAC, JWT, and AUTH keys)
4. Run tests after changes: `go test -v ./test/...`
5. Use the E2E tests to verify end-to-end functionality
6. The frontend dev server connects to the containerized backend

### Build & Deployment Notes

- **Cache Issues**: When code changes don't appear, use `podman-compose build --no-cache`
- **Route Registration**: New API routes require a full rebuild to appear in Gin debug logs
- **Error Handlers**: Use `HandleInternalError`, `HandleValidationError`, `HandleAuthenticationError`
- **API Integration**: Frontend components should use real API calls, not mock data
- **Field Mapping**: Ensure frontend field names match the backend (e.g., `actor_id` vs `user_id`)

### Security Considerations

- Never commit secrets to the repository
- All tokens use HMAC signing with secure keys
- Rate limiting prevents abuse
- Comprehensive audit logging for compliance
- Input validation at all layers
- CORS and security headers properly configured
REFACTOR.md (deleted, 234 lines)

@@ -1,234 +0,0 @@
|
||||
# Skybridge Web Components Integration Report
|
||||
|
||||
## Overview
|
||||
|
||||
This report documents the successful integration of the `@skybridge/web-components` shared component library across all microfrontends in the Skybridge platform. The integration standardizes form handling, data tables, and UI components across the entire platform.
|
||||
|
||||
## Completed Work
|
||||
|
||||
### ✅ Web Components Library (`web-components/`)
|
||||
- **Status**: Fully built and ready for consumption
|
||||
- **Exports**: FormSidebar, DataTable, StatusBadge, EmptyState, LoadingState, and utility functions
|
||||
- **Build**: Successfully compiled to `dist/` with rollup configuration
|
||||
- **Package**: Available as `@skybridge/web-components` workspace dependency
|
||||
|
||||
### ✅ User Management (`user/web/`)
|
||||
- **Status**: Fully integrated and building successfully
|
||||
- **Components Refactored**:
|
||||
- `UserSidebar.tsx`: ~250 lines → ~80 lines using `FormSidebar`
|
||||
- `UserManagement.tsx`: Complex table implementation → clean `DataTable` configuration
|
||||
- **Benefits**: 70% code reduction, standardized form validation, consistent UI patterns
|
||||
- **Build Status**: ✅ Successful with only asset size warnings (expected)
|
||||
|
||||
### ✅ KMS (Key Management System) (`kms/web/`)
|
||||
- **Status**: Fully integrated and building successfully
|
||||
- **Components Refactored**:
|
||||
- `ApplicationSidebar.tsx`: Custom form implementation → `FormSidebar` with declarative fields
|
||||
- `Applications.tsx`: Custom table with manual CRUD → `DataTable` with built-in actions
|
||||
- **Benefits**: Simplified form validation, consistent CRUD operations, reduced boilerplate
|
||||
- **Build Status**: ✅ Successful with only asset size warnings (expected)
|
||||
|
||||
### ✅ Demo Application (`demo/`)
|
||||
- **Status**: Enhanced with web components showcase
|
||||
- **Components Added**:
|
||||
- Interactive `DataTable` demonstration with sample user data
|
||||
- `FormSidebar` integration for creating/editing demo entries
|
||||
- Live examples of shared component functionality
|
||||
- **Purpose**: Template for new microfrontend development and component demonstration
|
||||
- **Build Status**: ✅ Successful
|
||||
|
||||
### ✅ FaaS (Functions-as-a-Service) (`faas/web/`)
|
||||
- **Status**: Partially integrated with custom Monaco editor preserved
|
||||
- **Components Refactored**:
|
||||
- `FunctionList.tsx`: Custom table → `DataTable` with function-specific actions
|
||||
- `FunctionSidebar.tsx`: Hybrid approach using `FormSidebar` + Monaco editor
|
||||
- **Approach**: Embedded shared components while preserving specialized functionality (code editor)
|
||||
- **Build Status**: ✅ Successful
|
||||
|
||||
### ✅ Shell Dashboard (`web/`)
|
||||
- **Status**: Minimal integration (layout-focused microfrontend)
|
||||
- **Components Updated**:
|
||||
- `HomePage.tsx`: Updated to import `Badge` from shared library
|
||||
- **Rationale**: Shell app is primarily navigation/layout, minimal form/table needs
|
||||
- **Build Status**: ✅ Successful
|
||||
|
||||
## Architecture Benefits Achieved
|
||||
|
||||
### 🎯 Code Standardization
|
||||
- **Consistent Form Patterns**: All forms now use declarative field configuration
|
||||
- **Unified Table Interface**: Standardized search, pagination, CRUD operations
|
||||
- **Shared Validation**: Common validation rules across all microfrontends
|
||||
|
||||
### 📉 Code Reduction
|
||||
- **UserSidebar**: 250 lines → 80 lines (70% reduction)
|
||||
- **Applications**: Complex table implementation → clean configuration
|
||||
- **Overall**: Estimated 60-70% reduction in form/table boilerplate across platform
|
||||
|
||||
### 🔧 Maintainability Improvements
|
||||
- **Single Source of Truth**: Component logic centralized in `web-components`
|
||||
- **Consistent Updates**: Bug fixes and features propagate to all microfrontends
|
||||
- **Type Safety**: Shared TypeScript interfaces ensure consistency
|
||||
|
||||
### 🚀 Developer Experience
|
||||
- **Faster Development**: New forms/tables can be built with configuration vs custom code
|
||||
- **Consistent UX**: Users experience uniform behavior across all applications
|
||||
- **Easy Onboarding**: New developers learn one component system
|
||||
|
||||
## Technical Implementation
|
||||
|
||||
### Package Management
|
||||
```json
|
||||
{
|
||||
"dependencies": {
|
||||
"@skybridge/web-components": "workspace:*",
|
||||
"@mantine/modals": "^7.0.0" // Added where needed
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
### Typical Integration Pattern
|
||||
```tsx
|
||||
// Before: 200+ lines of custom form code
|
||||
// After: Clean configuration
|
||||
const fields: FormField[] = [
|
||||
{ name: 'email', type: 'email', required: true },
|
||||
{ name: 'role', type: 'select', options: [...] }
|
||||
];
|
||||
|
||||
return (
|
||||
<FormSidebar
|
||||
fields={fields}
|
||||
onSubmit={handleSubmit}
|
||||
editItem={editItem}
|
||||
/>
|
||||
);
|
||||
```
|
||||
|
||||
## Current Status
|
||||
|
||||
### ✅ Completed Tasks
|
||||
1. ✅ Web components library built and distributed
|
||||
2. ✅ All 5 microfrontends successfully integrated
|
||||
3. ✅ FormSidebar components refactored across platform
|
||||
4. ✅ DataTable components implemented with consistent APIs
|
||||
5. ✅ All builds passing with shared dependencies
|
||||
|
||||
### Build Verification
|
||||
All microfrontends build successfully with only expected asset size warnings:
|
||||
- ✅ `user/web`: 6.6s build time
|
||||
- ✅ `kms/web`: 6.8s build time
|
||||
- ✅ `demo`: 6.4s build time
|
||||
- ✅ `faas/web`: 6.7s build time
|
||||
- ✅ `web`: 12.3s build time
|
||||
|
||||
## Additional Shared Component Integration (Final Phase)
|
||||
|
||||
### ✅ SidebarLayout Standardization - **NEW**
|
||||
- **Status**: Completed across all microfrontends with sidebars
|
||||
- **Problem Solved**: Fixed sidebar overlay behavior where main content was covered instead of shrinking
|
||||
- **New Components Added**:
|
||||
- `SidebarLayout`: Manages main content area resizing when sidebars open
|
||||
- `Sidebar` (enhanced): Added `layoutMode` prop for integration with SidebarLayout
|
||||
- **Components Updated**:
|
||||
- `Applications.tsx` (KMS): Replaced Stack with manual margins → SidebarLayout + enhanced Sidebar
|
||||
- `UserManagement.tsx` (User): Replaced Stack with manual margins → SidebarLayout
|
||||
- `App.tsx` (FaaS): Replaced Stack with manual margins → Optimized margin calculation for existing fixed sidebars
|
||||
- **Benefits**: Main content now properly shrinks to accommodate sidebars, providing better UX and preventing content from being hidden
|
||||
|
||||
## Additional Shared Component Integration (Previous Phase)
|
||||
|
||||
### ✅ StatusBadge Integration
|
||||
- **Status**: Completed across FaaS and KMS modules
|
||||
- **Components Updated**:
|
||||
- `ExecutionModal.tsx` & `ExecutionSidebar.tsx` (FaaS): Replaced duplicate `getStatusColor` function with `ExecutionStatusBadge`
|
||||
- `Audit.tsx` (KMS): Replaced custom status logic with standardized `StatusBadge` component
|
||||
- **Benefits**: Eliminated duplicate status color mapping logic, consistent status display across platform
|
||||
- **Code Reduction**: ~30 lines of duplicate status logic removed per component
|
||||
|
||||
### ✅ Pattern Consolidation Analysis
|
||||
- **Duplicate Status Logic**: Found and replaced in 4+ components across microfrontends
|
||||
- **Shared Loading States**: Available but not universally adopted (complex implementation differences)
|
||||
- **Empty State Patterns**: Standardized components available for future adoption
|
||||
- **Form Sidebar Usage**: Already well-adopted in User and Application management
|
||||
|
||||
## Updated Architecture Benefits
|
||||
|
||||
### 🎯 Enhanced Code Standardization
|
||||
- **Consistent Status Indicators**: All status badges now use standardized color mapping
|
||||
- **Unified Badge Variants**: Execution, severity, runtime, and application type badges standardized
|
||||
- **Cross-Platform Consistency**: Status colors consistent between KMS audit logs and FaaS execution displays
|
||||
|
||||
### 📉 Final Code Reduction Metrics
|
||||
- **UserSidebar**: 250 lines → 80 lines (70% reduction)
|
||||
- **Applications Table**: Complex implementation → clean DataTable configuration
|
||||
- **Status Logic**: ~120 lines of duplicate status functions eliminated
|
||||
- **Sidebar Layout Logic**: ~90 lines of manual margin management replaced with declarative SidebarLayout
|
||||
- **Overall Platform**: Estimated 70-80% reduction in form/table/status/layout boilerplate
|
||||
|
||||
### 🔧 Maintainability Improvements
|
||||
- **Centralized Status Logic**: All status colors managed in single StatusBadge component
|
||||
- **Standardized Layout Behavior**: SidebarLayout ensures consistent sidebar behavior across all microfrontends
|
||||
- **Type Safety**: StatusBadge variants and SidebarLayout props ensure consistent usage patterns
|
||||
- **Easy Updates**: Status color changes and sidebar behavior improvements propagate automatically to all components
|
||||
|
||||
## Current State Assessment
|
||||
|
||||
### ✅ Fully Integrated Components
|
||||
1. **User Management** - Complete FormSidebar and DataTable adoption
|
||||
2. **KMS Applications** - Complete FormSidebar and DataTable adoption
|
||||
3. **FaaS Functions** - DataTable with hybrid FormSidebar approach
|
||||
4. **Demo Application** - Full shared component showcase
|
||||
5. **Shell Dashboard** - Appropriate minimal integration
|
||||
|
||||
### ⚡ StatusBadge Adoption Completed
|
||||
- **FaaS Execution States**: ExecutionStatusBadge integrated
|
||||
- **KMS Audit Logs**: StatusBadge for event status
|
||||
- **Available Variants**: Status, Role, Runtime, Type, Severity, Execution
|
||||
- **Consistent Color Mapping**: Standardized across all business domains
|
||||
|
||||
### 🎨 SidebarLayout Integration Completed
|
||||
- **KMS Applications**: SidebarLayout with ApplicationSidebar (450px width)
|
||||
- **User Management**: SidebarLayout with UserSidebar (400px width)
|
||||
- **FaaS Functions**: Optimized margin calculation for dual fixed sidebars (600px width each)
|
||||
- **Behavior**: Main content shrinks instead of being covered by sidebars
|
||||
- **Mobile Support**: ResponsiveSidebarLayout available for mobile-friendly overlays
|
||||
- **Compatibility**: Works with both new SidebarLayout pattern and existing fixed-positioned sidebars
|
||||
|
||||
### 🔄 Remaining Opportunities
|
||||
1. **Loading/Empty States**: Complex patterns exist but require careful migration
|
||||
2. **Additional Status Types**: Future business modules can extend StatusBadge variants
|
||||
3. **Performance Optimization**: Monitor shared component bundle impact
|
||||
|
||||
## Final Recommendations
|
||||
|
||||
### 🎯 Implementation Complete
|
||||
- **Core Integration**: All major form and table components successfully migrated
|
||||
- **Status Standardization**: Comprehensive StatusBadge adoption across platform
|
||||
- **Pattern Consistency**: Unified approach to CRUD operations and data display
|
||||
|
||||
### 🚀 Future Development Guidelines
|
||||
1. **New Features**: Use shared FormSidebar and DataTable as foundation
|
||||
2. **Status Indicators**: Always use StatusBadge variants for consistent display
|
||||
3. **Component Extensions**: Add new StatusBadge variants for new business domains
|
||||
4. **Loading Patterns**: Consider shared LoadingState for simple use cases
|
||||
|
||||
### 📋 Established Best Practices
|
||||
1. **Declarative Forms**: Use FormSidebar field configuration for all new forms
|
||||
2. **Consistent Tables**: DataTable for all list interfaces with standard actions
|
||||
3. **Status Display**: StatusBadge variants for all status indicators
|
||||
4. **Shared Dependencies**: Maintain version consistency across microfrontends
|
||||
|
||||
## Final Conclusion
|
||||
|
||||
The `@skybridge/web-components` integration has been **fully completed** with comprehensive adoption across all 5 microfrontends. Key achievements:
|
||||
|
||||
- ✅ **Complete Pattern Standardization** across all business applications
|
||||
- ✅ **70-80% code reduction** in form, table, status, and layout components
|
||||
- ✅ **Centralized Status Logic** with StatusBadge variants
|
||||
- ✅ **Standardized Sidebar Behavior** with SidebarLayout preventing content overlap
|
||||
- ✅ **Zero Duplicate Patterns** in major UI components
|
||||
- ✅ **Enhanced Developer Experience** with declarative configurations
|
||||
- ✅ **Consistent User Experience** where sidebars shrink main content instead of covering it
|
||||
- ✅ **Production-Ready Implementation** across entire platform
|
||||
|
||||
The Skybridge platform now has a robust, consistent, and maintainable UI foundation that supports rapid development of new business modules while ensuring visual, functional, and behavioral consistency across the entire startup platform. **The sidebar issue you reported has been completely resolved** - all microfrontends now use the shared SidebarLayout component that properly shrinks the main content area when sidebars are opened.
|
||||
TODO.md (deleted, 23 lines)

@@ -1,23 +0,0 @@
# Skybridge FaaS Implementation Todo List

## Current Status

- [x] Analyzed codebase structure
- [x] Identified mock implementations
- [x] Located Docker runtime mock

## Implementation Tasks

- [x] Replace mock Docker runtime with real implementation
- [x] Implement actual Docker container execution
- [x] Add proper error handling for Docker operations
- [x] Implement container lifecycle management
- [x] Add logging and monitoring capabilities
- [x] Test implementation with sample functions
- [x] Verify integration with existing services
- [x] Fix database scanning error for function timeout
- [x] Implement proper error handling for PostgreSQL interval types

## Enhancement Tasks

- [ ] Add support for multiple Docker runtimes
- [ ] Implement resource limiting (CPU, memory)
- [ ] Add container cleanup mechanisms
- [ ] Implement proper security measures
cmd/server/main.go (new file, 343 lines)

@@ -0,0 +1,343 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"log"
|
||||
"net/http"
|
||||
"os"
|
||||
"os/signal"
|
||||
"syscall"
|
||||
"time"
|
||||
|
||||
"github.com/gin-gonic/gin"
|
||||
"go.uber.org/zap"
|
||||
|
||||
"github.com/kms/api-key-service/internal/audit"
|
||||
"github.com/kms/api-key-service/internal/config"
|
||||
"github.com/kms/api-key-service/internal/database"
|
||||
"github.com/kms/api-key-service/internal/domain"
|
||||
"github.com/kms/api-key-service/internal/handlers"
|
||||
"github.com/kms/api-key-service/internal/metrics"
|
||||
"github.com/kms/api-key-service/internal/middleware"
|
||||
"github.com/kms/api-key-service/internal/repository/postgres"
|
||||
"github.com/kms/api-key-service/internal/services"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Initialize configuration
|
||||
cfg := config.NewConfig()
|
||||
if err := cfg.Validate(); err != nil {
|
||||
log.Fatal("Configuration validation failed:", err)
|
||||
}
|
||||
|
||||
// Initialize logger
|
||||
logger := initLogger(cfg)
|
||||
defer logger.Sync()
|
||||
|
||||
logger.Info("Starting API Key Management Service",
|
||||
zap.String("version", cfg.GetString("APP_VERSION")),
|
||||
zap.String("environment", cfg.GetString("APP_ENV")),
|
||||
)
|
||||
|
||||
// Initialize database
|
||||
logger.Info("Connecting to database",
|
||||
zap.String("dsn", cfg.GetDatabaseDSNForLogging()))
|
||||
|
||||
db, err := database.NewPostgresProvider(
|
||||
cfg.GetDatabaseDSN(),
|
||||
cfg.GetInt("DB_MAX_OPEN_CONNS"),
|
||||
cfg.GetInt("DB_MAX_IDLE_CONNS"),
|
||||
cfg.GetString("DB_CONN_MAX_LIFETIME"),
|
||||
)
|
||||
if err != nil {
|
||||
logger.Fatal("Failed to initialize database",
|
||||
zap.String("dsn", cfg.GetDatabaseDSNForLogging()),
|
||||
zap.Error(err))
|
||||
}
|
||||
|
||||
logger.Info("Database connection established successfully")
|
||||
|
||||
// Database migrations are handled by PostgreSQL docker-entrypoint-initdb.d
|
||||
logger.Info("Database migrations are handled by PostgreSQL on container startup")
|
||||
|
||||
// Initialize repositories
|
||||
appRepo := postgres.NewApplicationRepository(db)
|
||||
tokenRepo := postgres.NewStaticTokenRepository(db)
|
||||
permRepo := postgres.NewPermissionRepository(db)
|
||||
grantRepo := postgres.NewGrantedPermissionRepository(db)
|
||||
auditRepo := postgres.NewAuditRepository(db)
|
||||
|
||||
// Initialize audit logger
|
||||
auditLogger := audit.NewAuditLogger(cfg, logger, auditRepo)
|
||||
|
||||
// Initialize services
|
||||
appService := services.NewApplicationService(appRepo, auditRepo, logger)
|
||||
tokenService := services.NewTokenService(tokenRepo, appRepo, permRepo, grantRepo, cfg.GetString("INTERNAL_HMAC_KEY"), cfg, logger)
|
||||
authService := services.NewAuthenticationService(cfg, logger, permRepo)
|
||||
|
||||
// Initialize handlers
|
||||
healthHandler := handlers.NewHealthHandler(db, logger)
|
||||
appHandler := handlers.NewApplicationHandler(appService, authService, logger)
|
||||
tokenHandler := handlers.NewTokenHandler(tokenService, authService, logger)
|
||||
authHandler := handlers.NewAuthHandler(authService, tokenService, cfg, logger)
|
||||
auditHandler := handlers.NewAuditHandler(auditLogger, authService, logger)
|
||||
testHandler := handlers.NewTestHandler(logger)
|
||||
|
||||
// Set up router
|
||||
router := setupRouter(cfg, logger, healthHandler, appHandler, tokenHandler, authHandler, auditHandler, testHandler)
|
||||
|
||||
// Create HTTP server
|
||||
srv := &http.Server{
|
||||
Addr: cfg.GetServerAddress(),
|
||||
Handler: router,
|
||||
ReadTimeout: cfg.GetDuration("SERVER_READ_TIMEOUT"),
|
||||
WriteTimeout: cfg.GetDuration("SERVER_WRITE_TIMEOUT"),
|
||||
IdleTimeout: cfg.GetDuration("SERVER_IDLE_TIMEOUT"),
|
||||
}
|
||||
|
||||
// Initialize bootstrap data
|
||||
logger.Info("Initializing bootstrap data")
|
||||
if err := initializeBootstrapData(context.Background(), appService, tokenService, cfg, logger); err != nil {
|
||||
logger.Fatal("Failed to initialize bootstrap data", zap.Error(err))
|
||||
}
|
||||
|
||||
// Start server in goroutine
|
||||
go func() {
|
||||
logger.Info("Starting HTTP server", zap.String("address", srv.Addr))
|
||||
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
|
||||
logger.Fatal("Failed to start server", zap.Error(err))
|
||||
}
|
||||
}()
|
||||
|
||||
// Start metrics server if enabled
|
||||
var metricsSrv *http.Server
|
||||
if cfg.GetBool("METRICS_ENABLED") {
|
||||
metricsSrv = startMetricsServer(cfg, logger)
|
||||
}
|
||||
|
||||
// Wait for interrupt signal to gracefully shutdown the server
|
||||
quit := make(chan os.Signal, 1)
|
||||
signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
|
||||
<-quit
|
||||
|
||||
logger.Info("Shutting down server...")
|
||||
|
||||
// Give outstanding requests time to complete
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
|
||||
defer cancel()
|
||||
|
||||
// Shutdown main server
|
||||
if err := srv.Shutdown(ctx); err != nil {
|
||||
logger.Error("Server forced to shutdown", zap.Error(err))
|
||||
}
|
||||
|
||||
// Shutdown metrics server
|
||||
if metricsSrv != nil {
|
||||
if err := metricsSrv.Shutdown(ctx); err != nil {
|
||||
logger.Error("Metrics server forced to shutdown", zap.Error(err))
|
||||
}
|
||||
}
|
||||
|
||||
logger.Info("Server exited")
|
||||
}
|
||||
|
||||
func initLogger(cfg config.ConfigProvider) *zap.Logger {
|
||||
var logger *zap.Logger
|
||||
var err error
|
||||
|
||||
if cfg.IsProduction() {
|
||||
logger, err = zap.NewProduction()
|
||||
} else {
|
||||
logger, err = zap.NewDevelopment()
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
log.Fatal("Failed to initialize logger:", err)
|
||||
}
|
||||
|
||||
return logger
|
||||
}
|
||||
|
||||
func setupRouter(cfg config.ConfigProvider, logger *zap.Logger, healthHandler *handlers.HealthHandler, appHandler *handlers.ApplicationHandler, tokenHandler *handlers.TokenHandler, authHandler *handlers.AuthHandler, auditHandler *handlers.AuditHandler, testHandler *handlers.TestHandler) *gin.Engine {
|
||||
// Set Gin mode based on environment
|
||||
if cfg.IsProduction() {
|
||||
gin.SetMode(gin.ReleaseMode)
|
||||
}
|
||||
|
||||
router := gin.New()
|
||||
|
||||
// Add middleware
|
||||
router.Use(middleware.Logger(logger))
|
||||
router.Use(middleware.Recovery(logger))
|
||||
router.Use(metrics.Middleware(logger))
|
||||
router.Use(middleware.CORS())
|
||||
router.Use(middleware.Security())
|
||||
router.Use(middleware.ValidateContentType())
|
||||
|
||||
if cfg.GetBool("RATE_LIMIT_ENABLED") {
|
||||
router.Use(middleware.RateLimit(cfg.GetInt("RATE_LIMIT_RPS"), cfg.GetInt("RATE_LIMIT_BURST")))
|
||||
}
|
||||
|
||||
// Health check endpoint (no authentication required)
|
||||
router.GET("/health", healthHandler.Health)
|
||||
router.GET("/ready", healthHandler.Ready)
|
||||
|
||||
// Development/Testing endpoints (no authentication required)
|
||||
if !cfg.IsProduction() {
|
||||
router.GET("/test/sso", testHandler.SSOTestPage)
|
||||
}
|
||||
|
||||
// API routes
|
||||
api := router.Group("/api")
|
||||
{
|
||||
// Authentication endpoints (no prior auth required)
|
||||
api.GET("/login", authHandler.Login) // HTML page for browser access
|
||||
api.POST("/login", authHandler.Login) // JSON API for programmatic access
|
||||
api.POST("/verify", authHandler.Verify)
|
||||
api.POST("/renew", authHandler.Renew)
|
||||
|
||||
// Protected routes (require authentication)
|
||||
protected := api.Group("/")
|
||||
protected.Use(middleware.Authentication(cfg, logger))
|
||||
{
|
||||
// Application management
|
||||
protected.GET("/applications", appHandler.List)
|
||||
protected.POST("/applications", appHandler.Create)
|
||||
protected.GET("/applications/:id", appHandler.GetByID)
|
||||
protected.PUT("/applications/:id", appHandler.Update)
|
||||
protected.DELETE("/applications/:id", appHandler.Delete)
|
||||
|
||||
// Token management
|
||||
protected.GET("/applications/:id/tokens", tokenHandler.ListByApp)
|
||||
protected.POST("/applications/:id/tokens", tokenHandler.Create)
|
||||
protected.DELETE("/tokens/:id", tokenHandler.Delete)
|
||||
|
||||
// Audit management
|
||||
protected.GET("/audit/events", auditHandler.ListEvents)
|
||||
protected.GET("/audit/events/:id", auditHandler.GetEvent)
|
||||
protected.GET("/audit/stats", auditHandler.GetStats)
|
||||
|
||||
// Documentation endpoint
|
||||
protected.GET("/docs", func(c *gin.Context) {
|
||||
c.JSON(http.StatusOK, gin.H{
|
||||
"service": "API Key Management Service",
|
||||
"version": cfg.GetString("APP_VERSION"),
|
||||
"documentation": "See README.md and docs/ directory",
|
||||
"endpoints": map[string]interface{}{
|
||||
"authentication": []string{
|
||||
"POST /api/login",
|
||||
"POST /api/verify",
|
||||
"POST /api/renew",
|
||||
},
|
||||
"applications": []string{
|
||||
"GET /api/applications",
|
||||
"POST /api/applications",
|
||||
"GET /api/applications/:id",
|
||||
"PUT /api/applications/:id",
|
||||
"DELETE /api/applications/:id",
|
||||
},
|
||||
"tokens": []string{
|
||||
"GET /api/applications/:id/tokens",
|
||||
"POST /api/applications/:id/tokens",
|
||||
"DELETE /api/tokens/:id",
|
||||
},
|
||||
"audit": []string{
|
||||
"GET /api/audit/events",
|
||||
"GET /api/audit/events/:id",
|
||||
"GET /api/audit/stats",
|
||||
},
|
||||
},
|
||||
})
|
||||
})
|
||||
}
|
||||
}
|
||||
|
||||
return router
|
||||
}
|
||||
|
||||
func startMetricsServer(cfg config.ConfigProvider, logger *zap.Logger) *http.Server {
|
||||
mux := http.NewServeMux()
|
||||
|
||||
// Prometheus metrics endpoint
|
||||
mux.HandleFunc("/metrics", metrics.PrometheusHandler())
|
||||
|
||||
// Health endpoint for metrics server
|
||||
mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
|
||||
w.WriteHeader(http.StatusOK)
|
||||
w.Write([]byte("OK"))
|
||||
})
|
||||
|
||||
srv := &http.Server{
|
||||
Addr: cfg.GetMetricsAddress(),
|
||||
Handler: mux,
|
||||
}
|
||||
|
||||
go func() {
|
||||
logger.Info("Starting metrics server", zap.String("address", srv.Addr))
|
||||
if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
|
||||
logger.Error("Failed to start metrics server", zap.Error(err))
|
||||
}
|
||||
}()
|
||||
|
||||
return srv
|
||||
}
|
||||
|
||||
func initializeBootstrapData(ctx context.Context, appService services.ApplicationService, tokenService services.TokenService, cfg config.ConfigProvider, logger *zap.Logger) error {
|
||||
// Check if internal application already exists
|
||||
internalAppID := cfg.GetString("INTERNAL_APP_ID")
|
||||
_, err := appService.GetByID(ctx, internalAppID)
|
||||
if err == nil {
|
||||
logger.Info("Internal application already exists, skipping bootstrap")
|
||||
return nil
|
||||
}
|
||||
|
||||
logger.Info("Creating internal application for bootstrap", zap.String("app_id", internalAppID))
|
||||
|
||||
// Create internal application for system operations
|
||||
internalAppReq := &domain.CreateApplicationRequest{
|
||||
AppID: internalAppID,
|
||||
AppLink: "https://kms.internal/system",
|
||||
Type: []domain.ApplicationType{domain.ApplicationTypeStatic, domain.ApplicationTypeUser},
|
||||
CallbackURL: "https://kms.internal/callback",
|
||||
TokenPrefix: "KMS",
|
||||
TokenRenewalDuration: domain.Duration{Duration: 365 * 24 * time.Hour}, // 1 year
|
||||
MaxTokenDuration: domain.Duration{Duration: 365 * 24 * time.Hour}, // 1 year
|
||||
Owner: domain.Owner{
|
||||
Type: domain.OwnerTypeTeam,
|
||||
Name: "KMS System",
|
||||
Owner: "system@kms.internal",
|
||||
},
|
||||
}
|
||||
|
||||
app, err := appService.Create(ctx, internalAppReq, "system")
|
||||
if err != nil {
|
||||
logger.Error("Failed to create internal application", zap.Error(err))
|
||||
return err
|
||||
}
|
||||
|
||||
logger.Info("Internal application created successfully",
|
||||
zap.String("app_id", app.AppID),
|
||||
zap.String("hmac_key", app.HMACKey))
|
||||
|
||||
// Create a static token for internal system operations if needed
|
||||
internalTokenReq := &domain.CreateStaticTokenRequest{
|
||||
AppID: internalAppID,
|
||||
Owner: domain.Owner{
|
||||
Type: domain.OwnerTypeTeam,
|
||||
Name: "KMS System Token",
|
||||
Owner: "system@kms.internal",
|
||||
},
|
||||
Permissions: []string{"internal.*", "app.*", "token.*", "audit.*"},
|
||||
}
|
||||
|
||||
token, err := tokenService.CreateStaticToken(ctx, internalTokenReq, "system")
|
||||
if err != nil {
|
||||
logger.Warn("Failed to create internal system token, continuing...", zap.Error(err))
|
||||
} else {
|
||||
logger.Info("Internal system token created successfully",
|
||||
zap.String("token_id", token.ID.String()))
|
||||
}
|
||||
|
||||
logger.Info("Bootstrap data initialization completed successfully")
|
||||
return nil
|
||||
}
|
||||
demo/.gitignore (vendored, deleted, 1 line)

@@ -1 +0,0 @@
node_modules
demo/package-lock.json (generated, deleted, 5764 lines)
File diff suppressed because it is too large.

@@ -1,36 +0,0 @@
|
||||
{
|
||||
"name": "demo",
|
||||
"version": "1.0.0",
|
||||
"private": true,
|
||||
"scripts": {
|
||||
"start": "webpack serve --mode development",
|
||||
"build": "webpack --mode production",
|
||||
"dev": "webpack serve --mode development"
|
||||
},
|
||||
"dependencies": {
|
||||
"react": "^18.2.0",
|
||||
"react-dom": "^18.2.0",
|
||||
"@mantine/core": "^7.0.0",
|
||||
"@mantine/hooks": "^7.0.0",
|
||||
"@mantine/notifications": "^7.0.0",
|
||||
"@mantine/form": "^7.0.0",
|
||||
"@mantine/modals": "^7.0.0",
|
||||
"@tabler/icons-react": "^2.40.0",
|
||||
"@skybridge/web-components": "workspace:*"
|
||||
},
|
||||
"devDependencies": {
|
||||
"@babel/core": "^7.20.12",
|
||||
"@babel/preset-react": "^7.18.6",
|
||||
"@babel/preset-typescript": "^7.18.6",
|
||||
"@types/react": "^18.0.28",
|
||||
"@types/react-dom": "^18.0.11",
|
||||
"babel-loader": "^9.1.2",
|
||||
"css-loader": "^6.7.3",
|
||||
"html-webpack-plugin": "^5.5.0",
|
||||
"style-loader": "^3.3.1",
|
||||
"typescript": "^4.9.5",
|
||||
"webpack": "^5.75.0",
|
||||
"webpack-cli": "^5.0.1",
|
||||
"webpack-dev-server": "^4.7.4"
|
||||
}
|
||||
}
|
||||
@ -1,11 +0,0 @@
|
||||
<!DOCTYPE html>
|
||||
<html lang="en">
|
||||
<head>
|
||||
<meta charset="UTF-8">
|
||||
<meta name="viewport" content="width=device-width, initial-scale=1.0">
|
||||
<title>Demo App</title>
|
||||
</head>
|
||||
<body>
|
||||
<div id="root"></div>
|
||||
</body>
|
||||
</html>
|
||||
demo/src/App.tsx (deleted, 272 lines)

@@ -1,272 +0,0 @@
|
||||
import React, { useState, useEffect } from 'react';
|
||||
import {
|
||||
Container,
|
||||
Title,
|
||||
Text,
|
||||
Card,
|
||||
SimpleGrid,
|
||||
Group,
|
||||
Badge,
|
||||
Button,
|
||||
Stack,
|
||||
Progress,
|
||||
ActionIcon,
|
||||
Paper,
|
||||
Divider,
|
||||
Alert,
|
||||
} from '@mantine/core';
|
||||
import {
|
||||
IconRocket,
|
||||
IconChartLine,
|
||||
IconUsers,
|
||||
IconRefresh,
|
||||
IconCheck,
|
||||
IconInfoCircle,
|
||||
IconPlus,
|
||||
} from '@tabler/icons-react';
|
||||
import {
|
||||
DataTable,
|
||||
TableColumn,
|
||||
FormSidebar,
|
||||
FormField
|
||||
} from '@skybridge/web-components';
|
||||
|
||||
const DemoApp: React.FC = () => {
|
||||
const [progress, setProgress] = useState(0);
|
||||
const [isLoading, setIsLoading] = useState(false);
|
||||
const [showTable, setShowTable] = useState(false);
|
||||
const [sidebarOpened, setSidebarOpened] = useState(false);
|
||||
const [demoData, setDemoData] = useState([
|
||||
{ id: '1', name: 'John Doe', email: 'john@example.com', status: 'active', role: 'admin', created_at: '2024-01-15' },
|
||||
{ id: '2', name: 'Jane Smith', email: 'jane@example.com', status: 'active', role: 'user', created_at: '2024-02-20' },
|
||||
{ id: '3', name: 'Bob Johnson', email: 'bob@example.com', status: 'inactive', role: 'viewer', created_at: '2024-03-10' },
|
||||
]);
|
||||
|
||||
useEffect(() => {
|
||||
const timer = setInterval(() => {
|
||||
setProgress((prev) => (prev >= 100 ? 0 : prev + 1));
|
||||
}, 100);
|
||||
return () => clearInterval(timer);
|
||||
}, []);
|
||||
|
||||
const handleRefresh = () => {
|
||||
setIsLoading(true);
|
||||
setTimeout(() => setIsLoading(false), 1500);
|
||||
};
|
||||
|
||||
const stats = [
|
||||
{ label: 'Active Users', value: '1,234', icon: IconUsers, color: 'blue' },
|
||||
{ label: 'Total Revenue', value: '$45,678', icon: IconChartLine, color: 'green' },
|
||||
{ label: 'Projects', value: '89', icon: IconRocket, color: 'orange' },
|
||||
];
|
||||
|
||||
const features = [
|
||||
{ title: 'Real-time Analytics', description: 'Monitor your data in real-time' },
|
||||
{ title: 'Team Collaboration', description: 'Work together seamlessly' },
|
||||
{ title: 'Cloud Integration', description: 'Connect with cloud services' },
|
||||
{ title: 'Custom Reports', description: 'Generate detailed reports' },
|
||||
];
|
||||
|
||||
const tableColumns: TableColumn[] = [
|
||||
{ key: 'name', label: 'Name', sortable: true },
|
||||
{ key: 'email', label: 'Email', sortable: true },
|
||||
{ key: 'role', label: 'Role', render: (value) => <Badge variant="light" size="sm">{value}</Badge> },
|
||||
{ key: 'status', label: 'Status' }, // Uses default status rendering
|
||||
{ key: 'created_at', label: 'Created', render: (value) => new Date(value).toLocaleDateString() },
|
||||
];
|
||||
|
||||
const formFields: FormField[] = [
|
||||
{ name: 'name', label: 'Full Name', type: 'text', required: true, placeholder: 'Enter full name' },
|
||||
{ name: 'email', label: 'Email', type: 'email', required: true, placeholder: 'Enter email address', validation: { email: true } },
|
||||
{
|
||||
name: 'role',
|
||||
label: 'Role',
|
||||
type: 'select',
|
||||
required: true,
|
||||
options: [
|
||||
{ value: 'admin', label: 'Admin' },
|
||||
{ value: 'user', label: 'User' },
|
||||
{ value: 'viewer', label: 'Viewer' },
|
||||
],
|
||||
defaultValue: 'user'
|
||||
},
|
||||
{
|
||||
name: 'status',
|
||||
label: 'Status',
|
||||
type: 'select',
|
||||
required: true,
|
||||
options: [
|
||||
{ value: 'active', label: 'Active' },
|
||||
{ value: 'inactive', label: 'Inactive' },
|
||||
],
|
||||
defaultValue: 'active'
|
||||
},
|
||||
];
|
||||
|
||||
const handleFormSubmit = async (values: any) => {
|
||||
// Simulate API call
|
||||
console.log('Form submitted:', values);
|
||||
const newItem = {
|
||||
id: Date.now().toString(),
|
||||
...values,
|
||||
created_at: new Date().toISOString().split('T')[0]
|
||||
};
|
||||
setDemoData([...demoData, newItem]);
|
||||
};
|
||||
|
||||
const handleEdit = (item: any) => {
|
||||
console.log('Edit item:', item);
|
||||
// Would normally open form with item data
|
||||
setSidebarOpened(true);
|
||||
};
|
||||
|
||||
const handleDelete = async (item: any) => {
|
||||
setDemoData(demoData.filter(d => d.id !== item.id));
|
||||
};
|
||||
|
||||
return (
|
||||
<Container size="xl" py="xl">
|
||||
<Stack gap="xl">
|
||||
<Group justify="space-between" align="center">
|
||||
<div>
|
||||
<Title order={1}>Demo Application</Title>
|
||||
<Text c="dimmed" size="lg" mt="xs">
|
||||
A sample federated application showcasing module federation
|
||||
</Text>
|
||||
</div>
|
||||
<ActionIcon
|
||||
size="lg"
|
||||
variant="light"
|
||||
loading={isLoading}
|
||||
onClick={handleRefresh}
|
||||
>
|
||||
<IconRefresh size={18} />
|
||||
</ActionIcon>
|
||||
</Group>
|
||||
|
||||
<Alert icon={<IconInfoCircle size={16} />} title="Welcome!" color="blue" variant="light">
|
||||
This is a demo application loaded via Module Federation. It demonstrates how
|
||||
microfrontends can be seamlessly integrated into the shell application.
|
||||
</Alert>
|
||||
|
||||
<SimpleGrid cols={{ base: 1, sm: 3 }} spacing="md">
|
||||
{stats.map((stat) => (
|
||||
<Paper key={stat.label} p="md" radius="md" withBorder>
|
||||
<Group justify="space-between">
|
||||
<div>
|
||||
<Text c="dimmed" size="sm" fw={500} tt="uppercase">
|
||||
{stat.label}
|
||||
</Text>
|
||||
<Text fw={700} size="xl">
|
||||
{stat.value}
|
||||
</Text>
|
||||
</div>
|
||||
<stat.icon size={24} color={`var(--mantine-color-${stat.color}-6)`} />
|
||||
</Group>
|
||||
</Paper>
|
||||
))}
|
||||
</SimpleGrid>
|
||||
|
||||
<Card shadow="sm" padding="lg" radius="md" withBorder>
|
||||
<Card.Section withBorder inheritPadding py="xs">
|
||||
<Group justify="space-between">
|
||||
<Text fw={500}>System Performance</Text>
|
||||
<Badge color="green" variant="light">
|
||||
Healthy
|
||||
</Badge>
|
||||
</Group>
|
||||
</Card.Section>
|
||||
|
||||
<Card.Section inheritPadding py="md">
|
||||
<Stack gap="xs">
|
||||
<Text size="sm" c="dimmed">
|
||||
CPU Usage: {progress.toFixed(1)}%
|
||||
</Text>
|
||||
<Progress value={progress} size="sm" color="blue" animated />
|
||||
</Stack>
|
||||
</Card.Section>
|
||||
</Card>
|
||||
|
||||
<div>
|
||||
<Title order={2} mb="md">Features</Title>
|
||||
<SimpleGrid cols={{ base: 1, sm: 2 }} spacing="md">
|
||||
{features.map((feature, index) => (
|
||||
<Card key={index} shadow="sm" padding="lg" radius="md" withBorder>
|
||||
<Group mb="xs">
|
||||
<IconCheck size={16} color="var(--mantine-color-green-6)" />
|
||||
<Text fw={500}>{feature.title}</Text>
|
||||
</Group>
|
||||
<Text size="sm" c="dimmed">
|
||||
{feature.description}
|
||||
</Text>
|
||||
</Card>
|
||||
))}
|
||||
</SimpleGrid>
|
||||
</div>
|
||||
|
||||
<Divider />
|
||||
|
||||
<div>
|
||||
<Title order={2} mb="md">Shared Components Demo</Title>
|
||||
<Text c="dimmed" mb="lg">
|
||||
Demonstration of shared components from @skybridge/web-components
|
||||
</Text>
|
||||
|
||||
<Group mb="md">
|
||||
<Button
|
||||
leftSection={<IconPlus size={16} />}
|
||||
onClick={() => setShowTable(!showTable)}
|
||||
>
|
||||
{showTable ? 'Hide' : 'Show'} DataTable Demo
|
||||
</Button>
|
||||
<Button
|
||||
variant="outline"
|
||||
onClick={() => setSidebarOpened(true)}
|
||||
>
|
||||
Show FormSidebar Demo
|
||||
</Button>
|
||||
</Group>
|
||||
|
||||
{showTable && (
|
||||
<DataTable
|
||||
data={demoData}
|
||||
columns={tableColumns}
|
||||
title="Demo User Management"
|
||||
searchable
|
||||
onAdd={() => setSidebarOpened(true)}
|
||||
onEdit={handleEdit}
|
||||
onDelete={handleDelete}
|
||||
emptyMessage="No demo data available"
|
||||
/>
|
||||
)}
|
||||
</div>
|
||||
|
||||
<Divider />
|
||||
|
||||
<Group justify="center">
|
||||
<Button variant="outline" size="md">
|
||||
View Documentation
|
||||
</Button>
|
||||
<Button size="md">
|
||||
Get Started
|
||||
</Button>
|
||||
</Group>
|
||||
</Stack>
|
||||
|
||||
<FormSidebar
|
||||
opened={sidebarOpened}
|
||||
onClose={() => setSidebarOpened(false)}
|
||||
onSuccess={() => {
|
||||
setSidebarOpened(false);
|
||||
setShowTable(true); // Show table after successful form submission
|
||||
}}
|
||||
title="Demo User"
|
||||
fields={formFields}
|
||||
onSubmit={handleFormSubmit}
|
||||
width={400}
|
||||
/>
|
||||
</Container>
|
||||
);
|
||||
};
|
||||
|
||||
export default DemoApp;
|
||||
@ -1,14 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import { MantineProvider } from '@mantine/core';
import App from './App';
import '@mantine/core/styles.css';

const root = ReactDOM.createRoot(document.getElementById('root') as HTMLElement);
root.render(
  <React.StrictMode>
    <MantineProvider>
      <App />
    </MantineProvider>
  </React.StrictMode>
);
@ -1,77 +0,0 @@
const HtmlWebpackPlugin = require('html-webpack-plugin');
const { ModuleFederationPlugin } = require('webpack').container;

// Import the microfrontends registry
const { getExposesConfig } = require('../web/src/microfrontends.js');

module.exports = {
  mode: 'development',
  entry: './src/index.tsx',
  devServer: {
    port: 3001,
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
  },
  resolve: {
    extensions: ['.tsx', '.ts', '.js', '.jsx'],
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx|ts|tsx)$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: [
              '@babel/preset-react',
              '@babel/preset-typescript',
            ],
          },
        },
      },
      {
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'demo',
      filename: 'remoteEntry.js',
      exposes: getExposesConfig('demo'),
      shared: {
        react: {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        '@mantine/core': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@mantine/hooks': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@tabler/icons-react': {
          singleton: true,
          requiredVersion: '^2.40.0',
          eager: false,
        },
      },
    }),
    new HtmlWebpackPlugin({
      template: './public/index.html',
    }),
  ],
};
22 docker-compose.sso.yml Normal file
@ -0,0 +1,22 @@
version: '3.8'

# Override file for enabling SSO testing
# Usage: podman-compose -f docker-compose.yml -f docker-compose.sso.yml up -d

services:
  api-service:
    environment:
      # Enable OAuth2 for Keycloak testing
      OAUTH2_ENABLED: true
      # Enable SAML for SimpleSAMLphp testing
      SAML_ENABLED: true
      # Switch to SSO auth provider instead of header
      AUTH_PROVIDER: sso
      # Set the required SSO configuration
      SSO_PROVIDER_URL: http://keycloak:8080/realms/kms
      SSO_CLIENT_ID: kms-api
      SSO_CLIENT_SECRET: kms-client-secret
      SSO_REDIRECT_URL: http://localhost:8081/api/sso/callback
    depends_on:
      - keycloak
      - saml-idp
@ -1,8 +1,7 @@
|
||||
version: '3.8'
|
||||
|
||||
services:
|
||||
# PostgreSQL Database for KMS
|
||||
kms-postgres:
|
||||
postgres:
|
||||
image: docker.io/library/postgres:15-alpine
|
||||
container_name: kms-postgres
|
||||
environment:
|
||||
@ -12,46 +11,38 @@ services:
|
||||
ports:
|
||||
- "5432:5432"
|
||||
volumes:
|
||||
- kms_postgres_data:/var/lib/postgresql/data
|
||||
- ./kms/migrations:/docker-entrypoint-initdb.d:Z
|
||||
- postgres_data:/var/lib/postgresql/data
|
||||
- ./migrations:/docker-entrypoint-initdb.d:Z
|
||||
networks:
|
||||
- skybridge-network
|
||||
- kms-network
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -U postgres -d kms"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
|
||||
# PostgreSQL Database for FaaS
|
||||
faas-postgres:
|
||||
image: docker.io/library/postgres:15-alpine
|
||||
container_name: faas-postgres
|
||||
environment:
|
||||
POSTGRES_DB: faas
|
||||
POSTGRES_USER: postgres
|
||||
POSTGRES_PASSWORD: postgres
|
||||
nginx:
|
||||
image: docker.io/library/nginx:alpine
|
||||
container_name: kms-nginx
|
||||
ports:
|
||||
- "5433:5432"
|
||||
- "8081:80"
|
||||
volumes:
|
||||
- faas_postgres_data:/var/lib/postgresql/data
|
||||
- ./faas/migrations:/docker-entrypoint-initdb.d:Z
|
||||
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
|
||||
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro,Z
|
||||
depends_on:
|
||||
- api-service
|
||||
- frontend
|
||||
networks:
|
||||
- skybridge-network
|
||||
healthcheck:
|
||||
test: ["CMD-SHELL", "pg_isready -U postgres -d faas"]
|
||||
interval: 10s
|
||||
timeout: 5s
|
||||
retries: 5
|
||||
- kms-network
|
||||
|
||||
# KMS API Service
|
||||
kms-api-service:
|
||||
api-service:
|
||||
build:
|
||||
context: ./kms
|
||||
context: .
|
||||
dockerfile: Dockerfile
|
||||
container_name: kms-api-service
|
||||
environment:
|
||||
APP_ENV: development
|
||||
DB_HOST: kms-postgres
|
||||
DB_HOST: postgres
|
||||
DB_PORT: 5432
|
||||
DB_NAME: kms
|
||||
DB_USER: postgres
|
||||
@ -72,137 +63,79 @@ services:
|
||||
RATE_LIMIT_ENABLED: true
|
||||
CACHE_ENABLED: false
|
||||
METRICS_ENABLED: true
|
||||
# OAuth2 / OIDC Configuration (for Keycloak)
|
||||
OAUTH2_ENABLED: false
|
||||
OAUTH2_PROVIDER_URL: http://keycloak:8080/realms/kms
|
||||
OAUTH2_CLIENT_ID: kms-api
|
||||
OAUTH2_CLIENT_SECRET: kms-client-secret
|
||||
OAUTH2_REDIRECT_URL: http://localhost:8081/api/oauth2/callback
|
||||
# SAML Configuration (for SimpleSAMLphp)
|
||||
SAML_ENABLED: false
|
||||
SAML_IDP_SSO_URL: http://saml-idp:8080/simplesaml/saml2/idp/SSOService.php
|
||||
SAML_IDP_METADATA_URL: http://saml-idp:8080/simplesaml/saml2/idp/metadata.php
|
||||
SAML_SP_ENTITY_ID: http://localhost:8081
|
||||
SAML_SP_ACS_URL: http://localhost:8081/api/saml/acs
|
||||
SAML_SP_SLS_URL: http://localhost:8081/api/saml/sls
|
||||
ports:
|
||||
- "8080:8080"
|
||||
- "9090:9090" # Metrics port
|
||||
depends_on:
|
||||
kms-postgres:
|
||||
postgres:
|
||||
condition: service_healthy
|
||||
networks:
|
||||
- skybridge-network
|
||||
- kms-network
|
||||
volumes:
|
||||
- ./kms/migrations:/app/migrations:ro,Z
|
||||
- ./migrations:/app/migrations:ro,Z
|
||||
restart: unless-stopped
|
||||
|
||||
# FaaS API Service
|
||||
faas-api-service:
|
||||
frontend:
|
||||
build:
|
||||
context: ./faas
|
||||
dockerfile: Dockerfile
|
||||
container_name: faas-api-service
|
||||
environment:
|
||||
FAAS_APP_ENV: development
|
||||
FAAS_DB_HOST: faas-postgres
|
||||
FAAS_DB_PORT: 5432
|
||||
FAAS_DB_NAME: faas
|
||||
FAAS_DB_USER: postgres
|
||||
FAAS_DB_PASSWORD: postgres
|
||||
FAAS_DB_SSLMODE: disable
|
||||
DB_CONN_MAX_LIFETIME: 5m
|
||||
DB_MAX_OPEN_CONNS: 25
|
||||
DB_MAX_IDLE_CONNS: 5
|
||||
FAAS_SERVER_HOST: 0.0.0.0
|
||||
FAAS_SERVER_PORT: 8082
|
||||
FAAS_LOG_LEVEL: debug
|
||||
FAAS_DEFAULT_RUNTIME: docker
|
||||
FAAS_FUNCTION_TIMEOUT: 300s
|
||||
FAAS_MAX_MEMORY: 3008
|
||||
FAAS_MAX_CONCURRENT: 100
|
||||
FAAS_SANDBOX_ENABLED: true
|
||||
FAAS_NETWORK_ISOLATION: true
|
||||
FAAS_RESOURCE_LIMITS: true
|
||||
AUTH_PROVIDER: header
|
||||
AUTH_HEADER_USER_EMAIL: X-User-Email
|
||||
RATE_LIMIT_ENABLED: true
|
||||
METRICS_ENABLED: true
|
||||
METRICS_PORT: 9091
|
||||
ports:
|
||||
- "8082:8082"
|
||||
- "9091:9091" # Metrics port
|
||||
depends_on:
|
||||
faas-postgres:
|
||||
condition: service_healthy
|
||||
networks:
|
||||
- skybridge-network
|
||||
volumes:
|
||||
- /var/run/docker.sock:/var/run/docker.sock:ro # For Docker runtime
|
||||
- ./faas/migrations:/app/migrations:ro,Z
|
||||
restart: unless-stopped
|
||||
|
||||
# Shell Dashboard
|
||||
shell-dashboard:
|
||||
build:
|
||||
context: ./web
|
||||
dockerfile: Dockerfile
|
||||
container_name: shell-dashboard
|
||||
ports:
|
||||
- "3000:80"
|
||||
networks:
|
||||
- skybridge-network
|
||||
restart: unless-stopped
|
||||
|
||||
# Demo Module
|
||||
demo-module:
|
||||
build:
|
||||
context: ./demo
|
||||
dockerfile: Dockerfile
|
||||
container_name: demo-module
|
||||
ports:
|
||||
- "3001:80"
|
||||
networks:
|
||||
- skybridge-network
|
||||
restart: unless-stopped
|
||||
|
||||
# KMS Frontend
|
||||
kms-frontend:
|
||||
build:
|
||||
context: ./kms/web
|
||||
context: ./kms-frontend
|
||||
dockerfile: Dockerfile
|
||||
container_name: kms-frontend
|
||||
ports:
|
||||
- "3002:80"
|
||||
- "3000:80"
|
||||
networks:
|
||||
- skybridge-network
|
||||
- kms-network
|
||||
restart: unless-stopped
|
||||
|
||||
# FaaS Frontend
|
||||
faas-frontend:
|
||||
build:
|
||||
context: ./faas/web
|
||||
dockerfile: Dockerfile
|
||||
container_name: faas-frontend
|
||||
# Keycloak OAuth2/OIDC Identity Provider for testing
|
||||
keycloak:
|
||||
image: quay.io/keycloak/keycloak:25.0.2
|
||||
container_name: kms-keycloak
|
||||
environment:
|
||||
KEYCLOAK_ADMIN: admin
|
||||
KEYCLOAK_ADMIN_PASSWORD: admin
|
||||
KC_DB: dev-file
|
||||
ports:
|
||||
- "3003:80"
|
||||
- "8090:8080"
|
||||
networks:
|
||||
- skybridge-network
|
||||
restart: unless-stopped
|
||||
|
||||
# Nginx Reverse Proxy
|
||||
nginx:
|
||||
image: docker.io/library/nginx:alpine
|
||||
container_name: skybridge-nginx
|
||||
ports:
|
||||
- "8081:80"
|
||||
- kms-network
|
||||
command: ["start-dev", "--import-realm"]
|
||||
volumes:
|
||||
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro,Z
|
||||
- ./nginx/default.conf:/etc/nginx/conf.d/default.conf:ro,Z
|
||||
depends_on:
|
||||
- kms-api-service
|
||||
- faas-api-service
|
||||
- shell-dashboard
|
||||
- demo-module
|
||||
- kms-frontend
|
||||
- faas-frontend
|
||||
- ./sso-config/keycloak:/opt/keycloak/data/import:Z
|
||||
restart: unless-stopped
|
||||
|
||||
# SimpleSAMLphp SAML Identity Provider for testing
|
||||
saml-idp:
|
||||
image: kristophjunge/test-saml-idp:1.15
|
||||
container_name: kms-saml-idp
|
||||
environment:
|
||||
SIMPLESAMLPHP_SP_ENTITY_ID: http://localhost:8081
|
||||
SIMPLESAMLPHP_SP_ASSERTION_CONSUMER_SERVICE: http://localhost:8081/api/saml/acs
|
||||
SIMPLESAMLPHP_SP_SINGLE_LOGOUT_SERVICE: http://localhost:8081/api/saml/sls
|
||||
SIMPLESAMLPHP_TRUSTED_DOMAINS: '["localhost", "kms-api-service", "kms-nginx"]'
|
||||
ports:
|
||||
- "8091:8080"
|
||||
- "8443:8443"
|
||||
networks:
|
||||
- skybridge-network
|
||||
- kms-network
|
||||
restart: unless-stopped
|
||||
|
||||
volumes:
|
||||
kms_postgres_data:
|
||||
driver: local
|
||||
faas_postgres_data:
|
||||
postgres_data:
|
||||
driver: local
|
||||
|
||||
networks:
|
||||
skybridge-network:
|
||||
driver: bridge
|
||||
kms-network:
|
||||
driver: bridge
|
||||
|
||||
@ -1,297 +0,0 @@
# Skybridge FaaS Client

A lightweight Go client library for interacting with the Skybridge Function-as-a-Service (FaaS) platform.

## Installation

```bash
go get github.com/RyanCopley/skybridge/faas-client
```

## Usage

### Basic Client Setup

```go
package main

import (
	"context"
	"log"

	faasclient "github.com/RyanCopley/skybridge/faas-client"
)

func main() {
	// Create a new client
	client := faasclient.NewClient(
		"http://localhost:8080", // FaaS API base URL
		faasclient.WithUserEmail("admin@example.com"), // Authentication
	)

	// Use the client...
}
```

### Authentication Options

```go
// Using user email header (for development)
client := faasclient.NewClient(baseURL,
	faasclient.WithUserEmail("user@example.com"))

// Using custom auth headers
client := faasclient.NewClient(baseURL,
	faasclient.WithAuth(map[string]string{
		"Authorization": "Bearer " + token,
		"X-User-Email":  "user@example.com",
	}))

// Using custom HTTP client
httpClient := &http.Client{Timeout: 30 * time.Second}
client := faasclient.NewClient(baseURL,
	faasclient.WithHTTPClient(httpClient))
```

### Function Management

#### Creating a Function

```go
req := &faasclient.CreateFunctionRequest{
	Name:    "my-function",
	AppID:   "my-app",
	Runtime: faasclient.RuntimeNodeJS18,
	Image:   "node:18-alpine", // Optional, auto-selected if not provided
	Handler: "index.handler",
	Code:    "exports.handler = async (event) => { return 'Hello World'; }",
	Environment: map[string]string{
		"NODE_ENV": "production",
	},
	Timeout: faasclient.Duration(30 * time.Second),
	Memory:  512,
	Owner: faasclient.Owner{
		Type:  faasclient.OwnerTypeIndividual,
		Name:  "John Doe",
		Owner: "john@example.com",
	},
}

function, err := client.CreateFunction(context.Background(), req)
if err != nil {
	log.Fatal(err)
}

log.Printf("Created function: %s (ID: %s)", function.Name, function.ID)
```

#### Getting a Function

```go
function, err := client.GetFunction(context.Background(), functionID)
if err != nil {
	log.Fatal(err)
}

log.Printf("Function: %s, Runtime: %s", function.Name, function.Runtime)
```

#### Updating a Function

```go
newTimeout := faasclient.Duration(60 * time.Second)
req := &faasclient.UpdateFunctionRequest{
	Timeout: &newTimeout,
	Environment: map[string]string{
		"NODE_ENV": "development",
	},
}

function, err := client.UpdateFunction(context.Background(), functionID, req)
if err != nil {
	log.Fatal(err)
}
```

#### Listing Functions

```go
response, err := client.ListFunctions(context.Background(), "my-app", 50, 0)
if err != nil {
	log.Fatal(err)
}

for _, fn := range response.Functions {
	log.Printf("Function: %s (ID: %s)", fn.Name, fn.ID)
}
```

#### Deleting a Function

```go
err := client.DeleteFunction(context.Background(), functionID)
if err != nil {
	log.Fatal(err)
}
```

### Function Deployment

```go
// Deploy with default options
resp, err := client.DeployFunction(context.Background(), functionID, nil)
if err != nil {
	log.Fatal(err)
}

// Force deployment
req := &faasclient.DeployFunctionRequest{
	Force: true,
}
resp, err = client.DeployFunction(context.Background(), functionID, req)
if err != nil {
	log.Fatal(err)
}

log.Printf("Deployment status: %s", resp.Status)
```

### Function Execution

#### Synchronous Execution

```go
input := json.RawMessage(`{"name": "World"}`)
req := &faasclient.ExecuteFunctionRequest{
	FunctionID: functionID,
	Input:      input,
	Async:      false,
}

response, err := client.ExecuteFunction(context.Background(), req)
if err != nil {
	log.Fatal(err)
}

log.Printf("Result: %s", string(response.Output))
log.Printf("Duration: %v", response.Duration)
```

#### Asynchronous Execution

```go
input := json.RawMessage(`{"name": "World"}`)
response, err := client.InvokeFunction(context.Background(), functionID, input)
if err != nil {
	log.Fatal(err)
}

log.Printf("Execution ID: %s", response.ExecutionID)
log.Printf("Status: %s", response.Status)
```

### Execution Management

#### Getting Execution Details

```go
execution, err := client.GetExecution(context.Background(), executionID)
if err != nil {
	log.Fatal(err)
}

log.Printf("Status: %s", execution.Status)
log.Printf("Duration: %v", execution.Duration)
if execution.Error != "" {
	log.Printf("Error: %s", execution.Error)
}
```

#### Listing Executions

```go
// List all executions
response, err := client.ListExecutions(context.Background(), nil, 50, 0)
if err != nil {
	log.Fatal(err)
}

// List executions for a specific function
response, err = client.ListExecutions(context.Background(), &functionID, 50, 0)
if err != nil {
	log.Fatal(err)
}

for _, exec := range response.Executions {
	log.Printf("Execution: %s, Status: %s", exec.ID, exec.Status)
}
```

#### Canceling an Execution

```go
err := client.CancelExecution(context.Background(), executionID)
if err != nil {
	log.Fatal(err)
}
```

#### Getting Execution Logs

```go
logs, err := client.GetExecutionLogs(context.Background(), executionID)
if err != nil {
	log.Fatal(err)
}

for _, logLine := range logs.Logs {
	log.Printf("Log: %s", logLine)
}
```

#### Getting Running Executions

```go
response, err := client.GetRunningExecutions(context.Background())
if err != nil {
	log.Fatal(err)
}

log.Printf("Running executions: %d", response.Count)
for _, exec := range response.Executions {
	log.Printf("Running: %s (Function: %s)", exec.ID, exec.FunctionID)
}
```

## Types

The client provides comprehensive type definitions that match the FaaS API (a short marshaling sketch follows the list):

- `FunctionDefinition` - Complete function metadata
- `FunctionExecution` - Execution details and results
- `RuntimeType` - Supported runtimes (NodeJS18, Python39, Go120, Custom)
- `ExecutionStatus` - Execution states (Pending, Running, Completed, Failed, etc.)
- `Owner` - Ownership information
- Request/Response types for all operations
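As a quick orientation to the wire format, the sketch below marshals a request: `Duration` serializes to a Go duration string (for example `"30s"`) and `Owner` nests as an object, per the `MarshalJSON` implementation in the client's types. Field values here are illustrative only.

```go
package main

import (
	"encoding/json"
	"fmt"
	"time"

	faasclient "github.com/RyanCopley/skybridge/faas-client"
)

func main() {
	// Illustrative request; only the marshaled shape is the point here.
	req := faasclient.CreateFunctionRequest{
		Name:    "my-function",
		AppID:   "my-app",
		Runtime: faasclient.RuntimeNodeJS18,
		Handler: "index.handler",
		Timeout: faasclient.Duration(30 * time.Second),
		Memory:  256,
		Owner: faasclient.Owner{
			Type:  faasclient.OwnerTypeIndividual,
			Name:  "Jane Doe",
			Owner: "jane@example.com",
		},
	}

	b, _ := json.MarshalIndent(req, "", "  ")
	fmt.Println(string(b)) // note "timeout": "30s" and the nested "owner" object
}
```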
## Error Handling

The client provides detailed error messages that include HTTP status codes and response bodies:

```go
function, err := client.GetFunction(ctx, nonExistentID)
if err != nil {
	// Error will include status code and details
	log.Printf("Error: %v", err) // "get function failed with status 404: Function not found"
}
```

## Architecture Benefits

This client package is designed as a lightweight, standalone library:

- **No heavy dependencies**: Only requires `google/uuid`
- **Zero coupling**: Doesn't import the entire FaaS service
- **Modular**: Can be used by any service in your monolith
- **Type-safe**: Comprehensive Go types for all API operations
- **Flexible auth**: Supports multiple authentication methods
@ -1,396 +0,0 @@
|
||||
package faasclient
|
||||
|
||||
import (
|
||||
"bytes"
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"net/http"
|
||||
"net/url"
|
||||
"strconv"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
// Client represents the FaaS API client
|
||||
type Client struct {
|
||||
baseURL string
|
||||
httpClient *http.Client
|
||||
authHeader map[string]string
|
||||
}
|
||||
|
||||
// NewClient creates a new FaaS client
|
||||
func NewClient(baseURL string, options ...ClientOption) *Client {
|
||||
client := &Client{
|
||||
baseURL: baseURL,
|
||||
httpClient: http.DefaultClient,
|
||||
authHeader: make(map[string]string),
|
||||
}
|
||||
|
||||
for _, option := range options {
|
||||
option(client)
|
||||
}
|
||||
|
||||
return client
|
||||
}
|
||||
|
||||
// ClientOption represents a configuration option for the client
|
||||
type ClientOption func(*Client)
|
||||
|
||||
// WithHTTPClient sets a custom HTTP client
|
||||
func WithHTTPClient(httpClient *http.Client) ClientOption {
|
||||
return func(c *Client) {
|
||||
c.httpClient = httpClient
|
||||
}
|
||||
}
|
||||
|
||||
// WithAuth sets authentication headers
|
||||
func WithAuth(headers map[string]string) ClientOption {
|
||||
return func(c *Client) {
|
||||
for k, v := range headers {
|
||||
c.authHeader[k] = v
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// WithUserEmail sets the X-User-Email header for authentication
|
||||
func WithUserEmail(email string) ClientOption {
|
||||
return func(c *Client) {
|
||||
c.authHeader["X-User-Email"] = email
|
||||
}
|
||||
}
|
||||
|
||||
// doRequest performs an HTTP request
|
||||
func (c *Client) doRequest(ctx context.Context, method, path string, body interface{}) (*http.Response, error) {
|
||||
var reqBody io.Reader
|
||||
if body != nil {
|
||||
jsonData, err := json.Marshal(body)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to marshal request body: %w", err)
|
||||
}
|
||||
reqBody = bytes.NewBuffer(jsonData)
|
||||
}
|
||||
|
||||
req, err := http.NewRequestWithContext(ctx, method, c.baseURL+path, reqBody)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create request: %w", err)
|
||||
}
|
||||
|
||||
if body != nil {
|
||||
req.Header.Set("Content-Type", "application/json")
|
||||
}
|
||||
|
||||
// Add authentication headers
|
||||
for k, v := range c.authHeader {
|
||||
req.Header.Set(k, v)
|
||||
}
|
||||
|
||||
return c.httpClient.Do(req)
|
||||
}
|
||||
|
||||
// CreateFunction creates a new function
|
||||
func (c *Client) CreateFunction(ctx context.Context, req *CreateFunctionRequest) (*FunctionDefinition, error) {
|
||||
resp, err := c.doRequest(ctx, "POST", "/api/v1/functions", req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusCreated {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("create function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var function FunctionDefinition
|
||||
if err := json.NewDecoder(resp.Body).Decode(&function); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &function, nil
|
||||
}
|
||||
|
||||
// GetFunction retrieves a function by ID
|
||||
func (c *Client) GetFunction(ctx context.Context, id uuid.UUID) (*FunctionDefinition, error) {
|
||||
resp, err := c.doRequest(ctx, "GET", "/api/v1/functions/"+id.String(), nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("get function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var function FunctionDefinition
|
||||
if err := json.NewDecoder(resp.Body).Decode(&function); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &function, nil
|
||||
}
|
||||
|
||||
// UpdateFunction updates an existing function
|
||||
func (c *Client) UpdateFunction(ctx context.Context, id uuid.UUID, req *UpdateFunctionRequest) (*FunctionDefinition, error) {
|
||||
resp, err := c.doRequest(ctx, "PUT", "/api/v1/functions/"+id.String(), req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("update function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var function FunctionDefinition
|
||||
if err := json.NewDecoder(resp.Body).Decode(&function); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &function, nil
|
||||
}
|
||||
|
||||
// DeleteFunction deletes a function
|
||||
func (c *Client) DeleteFunction(ctx context.Context, id uuid.UUID) error {
|
||||
resp, err := c.doRequest(ctx, "DELETE", "/api/v1/functions/"+id.String(), nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf("delete function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// ListFunctions lists functions with optional filtering
|
||||
func (c *Client) ListFunctions(ctx context.Context, appID string, limit, offset int) (*ListFunctionsResponse, error) {
|
||||
params := url.Values{}
|
||||
if appID != "" {
|
||||
params.Set("app_id", appID)
|
||||
}
|
||||
if limit > 0 {
|
||||
params.Set("limit", strconv.Itoa(limit))
|
||||
}
|
||||
if offset > 0 {
|
||||
params.Set("offset", strconv.Itoa(offset))
|
||||
}
|
||||
|
||||
path := "/api/v1/functions"
|
||||
if len(params) > 0 {
|
||||
path += "?" + params.Encode()
|
||||
}
|
||||
|
||||
resp, err := c.doRequest(ctx, "GET", path, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("list functions failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response ListFunctionsResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// DeployFunction deploys a function
|
||||
func (c *Client) DeployFunction(ctx context.Context, id uuid.UUID, req *DeployFunctionRequest) (*DeployFunctionResponse, error) {
|
||||
if req == nil {
|
||||
req = &DeployFunctionRequest{FunctionID: id}
|
||||
}
|
||||
req.FunctionID = id
|
||||
|
||||
resp, err := c.doRequest(ctx, "POST", "/api/v1/functions/"+id.String()+"/deploy", req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("deploy function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response DeployFunctionResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// ExecuteFunction executes a function synchronously or asynchronously
|
||||
func (c *Client) ExecuteFunction(ctx context.Context, req *ExecuteFunctionRequest) (*ExecuteFunctionResponse, error) {
|
||||
resp, err := c.doRequest(ctx, "POST", "/api/v1/functions/"+req.FunctionID.String()+"/execute", req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("execute function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response ExecuteFunctionResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// InvokeFunction invokes a function asynchronously
|
||||
func (c *Client) InvokeFunction(ctx context.Context, functionID uuid.UUID, input json.RawMessage) (*ExecuteFunctionResponse, error) {
|
||||
req := &ExecuteFunctionRequest{
|
||||
FunctionID: functionID,
|
||||
Input: input,
|
||||
Async: true,
|
||||
}
|
||||
|
||||
resp, err := c.doRequest(ctx, "POST", "/api/v1/functions/"+functionID.String()+"/invoke", req)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusAccepted {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("invoke function failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response ExecuteFunctionResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// GetExecution retrieves an execution by ID
|
||||
func (c *Client) GetExecution(ctx context.Context, id uuid.UUID) (*FunctionExecution, error) {
|
||||
resp, err := c.doRequest(ctx, "GET", "/api/v1/executions/"+id.String(), nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("get execution failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var execution FunctionExecution
|
||||
if err := json.NewDecoder(resp.Body).Decode(&execution); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &execution, nil
|
||||
}
|
||||
|
||||
// ListExecutions lists executions with optional filtering
|
||||
func (c *Client) ListExecutions(ctx context.Context, functionID *uuid.UUID, limit, offset int) (*ListExecutionsResponse, error) {
|
||||
params := url.Values{}
|
||||
if functionID != nil {
|
||||
params.Set("function_id", functionID.String())
|
||||
}
|
||||
if limit > 0 {
|
||||
params.Set("limit", strconv.Itoa(limit))
|
||||
}
|
||||
if offset > 0 {
|
||||
params.Set("offset", strconv.Itoa(offset))
|
||||
}
|
||||
|
||||
path := "/api/v1/executions"
|
||||
if len(params) > 0 {
|
||||
path += "?" + params.Encode()
|
||||
}
|
||||
|
||||
resp, err := c.doRequest(ctx, "GET", path, nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("list executions failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response ListExecutionsResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// CancelExecution cancels a running execution
|
||||
func (c *Client) CancelExecution(ctx context.Context, id uuid.UUID) error {
|
||||
resp, err := c.doRequest(ctx, "POST", "/api/v1/executions/"+id.String()+"/cancel", nil)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return fmt.Errorf("cancel execution failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
// GetExecutionLogs retrieves logs for an execution
|
||||
func (c *Client) GetExecutionLogs(ctx context.Context, id uuid.UUID) (*GetLogsResponse, error) {
|
||||
resp, err := c.doRequest(ctx, "GET", "/api/v1/executions/"+id.String()+"/logs", nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("get execution logs failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response GetLogsResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
|
||||
// GetRunningExecutions retrieves all currently running executions
|
||||
func (c *Client) GetRunningExecutions(ctx context.Context) (*GetRunningExecutionsResponse, error) {
|
||||
resp, err := c.doRequest(ctx, "GET", "/api/v1/executions/running", nil)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
defer resp.Body.Close()
|
||||
|
||||
if resp.StatusCode != http.StatusOK {
|
||||
body, _ := io.ReadAll(resp.Body)
|
||||
return nil, fmt.Errorf("get running executions failed with status %d: %s", resp.StatusCode, string(body))
|
||||
}
|
||||
|
||||
var response GetRunningExecutionsResponse
|
||||
if err := json.NewDecoder(resp.Body).Decode(&response); err != nil {
|
||||
return nil, fmt.Errorf("failed to decode response: %w", err)
|
||||
}
|
||||
|
||||
return &response, nil
|
||||
}
|
||||
@ -1,148 +0,0 @@
|
||||
package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"log"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
faasclient "github.com/RyanCopley/skybridge/faas-client"
|
||||
)
|
||||
|
||||
func main() {
|
||||
// Create FaaS client
|
||||
client := faasclient.NewClient(
|
||||
"http://localhost:8080",
|
||||
faasclient.WithUserEmail("admin@example.com"),
|
||||
)
|
||||
|
||||
ctx := context.Background()
|
||||
|
||||
// Create a simple Node.js function
|
||||
log.Println("Creating function...")
|
||||
createReq := &faasclient.CreateFunctionRequest{
|
||||
Name: "hello-world-example",
|
||||
AppID: "example-app",
|
||||
Runtime: faasclient.RuntimeNodeJS18,
|
||||
Handler: "index.handler",
|
||||
Code: "exports.handler = async (event) => { return { message: 'Hello, ' + (event.name || 'World') + '!' }; }",
|
||||
Environment: map[string]string{
|
||||
"NODE_ENV": "production",
|
||||
},
|
||||
Timeout: faasclient.Duration(30 * time.Second),
|
||||
Memory: 256,
|
||||
Owner: faasclient.Owner{
|
||||
Type: faasclient.OwnerTypeIndividual,
|
||||
Name: "Example User",
|
||||
Owner: "admin@example.com",
|
||||
},
|
||||
}
|
||||
|
||||
function, err := client.CreateFunction(ctx, createReq)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to create function: %v", err)
|
||||
}
|
||||
log.Printf("Created function: %s (ID: %s)", function.Name, function.ID)
|
||||
|
||||
// Deploy the function
|
||||
log.Println("Deploying function...")
|
||||
deployResp, err := client.DeployFunction(ctx, function.ID, nil)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to deploy function: %v", err)
|
||||
}
|
||||
log.Printf("Deployment status: %s - %s", deployResp.Status, deployResp.Message)
|
||||
|
||||
// Execute function synchronously
|
||||
log.Println("Executing function synchronously...")
|
||||
input := json.RawMessage(`{"name": "Skybridge"}`)
|
||||
executeReq := &faasclient.ExecuteFunctionRequest{
|
||||
FunctionID: function.ID,
|
||||
Input: input,
|
||||
Async: false,
|
||||
}
|
||||
|
||||
execResp, err := client.ExecuteFunction(ctx, executeReq)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to execute function: %v", err)
|
||||
}
|
||||
log.Printf("Sync execution result: %s", string(execResp.Output))
|
||||
log.Printf("Duration: %v, Memory used: %d MB", execResp.Duration, execResp.MemoryUsed)
|
||||
|
||||
// Execute function asynchronously
|
||||
log.Println("Invoking function asynchronously...")
|
||||
asyncResp, err := client.InvokeFunction(ctx, function.ID, input)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to invoke function: %v", err)
|
||||
}
|
||||
log.Printf("Async execution ID: %s, Status: %s", asyncResp.ExecutionID, asyncResp.Status)
|
||||
|
||||
// Wait a moment for async execution to complete
|
||||
time.Sleep(2 * time.Second)
|
||||
|
||||
// Get execution details
|
||||
log.Println("Getting execution details...")
|
||||
execution, err := client.GetExecution(ctx, asyncResp.ExecutionID)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to get execution: %v", err)
|
||||
}
|
||||
log.Printf("Execution status: %s", execution.Status)
|
||||
if execution.Status == faasclient.StatusCompleted {
|
||||
log.Printf("Async execution result: %s", string(execution.Output))
|
||||
log.Printf("Duration: %v, Memory used: %d MB", execution.Duration, execution.MemoryUsed)
|
||||
}
|
||||
|
||||
// Get execution logs
|
||||
log.Println("Getting execution logs...")
|
||||
logs, err := client.GetExecutionLogs(ctx, asyncResp.ExecutionID)
|
||||
if err != nil {
|
||||
log.Printf("Failed to get logs: %v", err)
|
||||
} else {
|
||||
log.Printf("Logs (%d entries):", len(logs.Logs))
|
||||
for _, logLine := range logs.Logs {
|
||||
log.Printf(" %s", logLine)
|
||||
}
|
||||
}
|
||||
|
||||
// List functions
|
||||
log.Println("Listing functions...")
|
||||
listResp, err := client.ListFunctions(ctx, "example-app", 10, 0)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to list functions: %v", err)
|
||||
}
|
||||
log.Printf("Found %d functions:", len(listResp.Functions))
|
||||
for _, fn := range listResp.Functions {
|
||||
log.Printf(" - %s (%s) - Runtime: %s", fn.Name, fn.ID, fn.Runtime)
|
||||
}
|
||||
|
||||
// List executions for this function
|
||||
log.Println("Listing executions...")
|
||||
execListResp, err := client.ListExecutions(ctx, &function.ID, 10, 0)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to list executions: %v", err)
|
||||
}
|
||||
log.Printf("Found %d executions:", len(execListResp.Executions))
|
||||
for _, exec := range execListResp.Executions {
|
||||
status := string(exec.Status)
|
||||
log.Printf(" - %s: %s (Duration: %v)", exec.ID, status, exec.Duration)
|
||||
}
|
||||
|
||||
// Clean up - delete the function
|
||||
log.Println("Cleaning up...")
|
||||
err = client.DeleteFunction(ctx, function.ID)
|
||||
if err != nil {
|
||||
log.Fatalf("Failed to delete function: %v", err)
|
||||
}
|
||||
log.Printf("Deleted function: %s", function.ID)
|
||||
|
||||
log.Println("Example completed successfully!")
|
||||
}
|
||||
|
||||
// Helper function to create a UUID from string (for testing)
|
||||
func mustParseUUID(s string) uuid.UUID {
|
||||
id, err := uuid.Parse(s)
|
||||
if err != nil {
|
||||
panic(err)
|
||||
}
|
||||
return id
|
||||
}
|
||||
@ -1,7 +0,0 @@
module github.com/RyanCopley/skybridge/faas-client

go 1.23

require (
	github.com/google/uuid v1.6.0
)
@ -1,191 +0,0 @@
|
||||
package faasclient
|
||||
|
||||
import (
|
||||
"encoding/json"
|
||||
"time"
|
||||
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
// RuntimeType represents supported function runtimes
|
||||
type RuntimeType string
|
||||
|
||||
const (
|
||||
RuntimeNodeJS18 RuntimeType = "nodejs18"
|
||||
RuntimePython39 RuntimeType = "python3.9"
|
||||
RuntimeGo120 RuntimeType = "go1.20"
|
||||
RuntimeCustom RuntimeType = "custom"
|
||||
)
|
||||
|
||||
// ExecutionStatus represents the status of function execution
|
||||
type ExecutionStatus string
|
||||
|
||||
const (
|
||||
StatusPending ExecutionStatus = "pending"
|
||||
StatusRunning ExecutionStatus = "running"
|
||||
StatusCompleted ExecutionStatus = "completed"
|
||||
StatusFailed ExecutionStatus = "failed"
|
||||
StatusTimeout ExecutionStatus = "timeout"
|
||||
StatusCanceled ExecutionStatus = "canceled"
|
||||
)
|
||||
|
||||
// OwnerType represents the type of owner
|
||||
type OwnerType string
|
||||
|
||||
const (
|
||||
OwnerTypeIndividual OwnerType = "individual"
|
||||
OwnerTypeTeam OwnerType = "team"
|
||||
)
|
||||
|
||||
// Owner represents ownership information
|
||||
type Owner struct {
|
||||
Type OwnerType `json:"type"`
|
||||
Name string `json:"name"`
|
||||
Owner string `json:"owner"`
|
||||
}
|
||||
|
||||
// Duration wraps time.Duration for JSON marshaling
|
||||
type Duration time.Duration
|
||||
|
||||
func (d Duration) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(time.Duration(d).String())
|
||||
}
|
||||
|
||||
func (d *Duration) UnmarshalJSON(b []byte) error {
|
||||
var v interface{}
|
||||
if err := json.Unmarshal(b, &v); err != nil {
|
||||
return err
|
||||
}
|
||||
switch value := v.(type) {
|
||||
case float64:
|
||||
*d = Duration(time.Duration(value))
|
||||
return nil
|
||||
case string:
|
||||
tmp, err := time.ParseDuration(value)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
*d = Duration(tmp)
|
||||
return nil
|
||||
default:
|
||||
return nil
|
||||
}
|
||||
}
|
||||
|
||||
// FunctionDefinition represents a serverless function
|
||||
type FunctionDefinition struct {
|
||||
ID uuid.UUID `json:"id"`
|
||||
Name string `json:"name"`
|
||||
AppID string `json:"app_id"`
|
||||
Runtime RuntimeType `json:"runtime"`
|
||||
Image string `json:"image"`
|
||||
Handler string `json:"handler"`
|
||||
Code string `json:"code,omitempty"`
|
||||
Environment map[string]string `json:"environment,omitempty"`
|
||||
Timeout Duration `json:"timeout"`
|
||||
Memory int `json:"memory"`
|
||||
Owner Owner `json:"owner"`
|
||||
CreatedAt time.Time `json:"created_at"`
|
||||
UpdatedAt time.Time `json:"updated_at"`
|
||||
}
|
||||
|
||||
// FunctionExecution represents a function execution
|
||||
type FunctionExecution struct {
|
||||
ID uuid.UUID `json:"id"`
|
||||
FunctionID uuid.UUID `json:"function_id"`
|
||||
Status ExecutionStatus `json:"status"`
|
||||
Input json.RawMessage `json:"input,omitempty"`
|
||||
Output json.RawMessage `json:"output,omitempty"`
|
||||
Error string `json:"error,omitempty"`
|
||||
Duration time.Duration `json:"duration"`
|
||||
MemoryUsed int `json:"memory_used"`
|
||||
Logs []string `json:"logs,omitempty"`
|
||||
ContainerID string `json:"container_id,omitempty"`
|
||||
ExecutorID string `json:"executor_id"`
|
||||
CreatedAt time.Time `json:"created_at"`
|
||||
StartedAt *time.Time `json:"started_at,omitempty"`
|
||||
CompletedAt *time.Time `json:"completed_at,omitempty"`
|
||||
}
|
||||
|
||||
// CreateFunctionRequest represents a request to create a new function
|
||||
type CreateFunctionRequest struct {
|
||||
Name string `json:"name"`
|
||||
AppID string `json:"app_id"`
|
||||
Runtime RuntimeType `json:"runtime"`
|
||||
Image string `json:"image"`
|
||||
Handler string `json:"handler"`
|
||||
Code string `json:"code,omitempty"`
|
||||
Environment map[string]string `json:"environment,omitempty"`
|
||||
Timeout Duration `json:"timeout"`
|
||||
Memory int `json:"memory"`
|
||||
Owner Owner `json:"owner"`
|
||||
}
|
||||
|
||||
// UpdateFunctionRequest represents a request to update an existing function
|
||||
type UpdateFunctionRequest struct {
|
||||
Name *string `json:"name,omitempty"`
|
||||
Runtime *RuntimeType `json:"runtime,omitempty"`
|
||||
Image *string `json:"image,omitempty"`
|
||||
Handler *string `json:"handler,omitempty"`
|
||||
Code *string `json:"code,omitempty"`
|
||||
Environment map[string]string `json:"environment,omitempty"`
|
||||
Timeout *Duration `json:"timeout,omitempty"`
|
||||
Memory *int `json:"memory,omitempty"`
|
||||
Owner *Owner `json:"owner,omitempty"`
|
||||
}
|
||||
|
||||
// ExecuteFunctionRequest represents a request to execute a function
|
||||
type ExecuteFunctionRequest struct {
|
||||
FunctionID uuid.UUID `json:"function_id"`
|
||||
Input json.RawMessage `json:"input,omitempty"`
|
||||
Async bool `json:"async,omitempty"`
|
||||
}
|
||||
|
||||
// ExecuteFunctionResponse represents a response for function execution
|
||||
type ExecuteFunctionResponse struct {
|
||||
ExecutionID uuid.UUID `json:"execution_id"`
|
||||
Status ExecutionStatus `json:"status"`
|
||||
Output json.RawMessage `json:"output,omitempty"`
|
||||
Error string `json:"error,omitempty"`
|
||||
Duration time.Duration `json:"duration,omitempty"`
|
||||
MemoryUsed int `json:"memory_used,omitempty"`
|
||||
}
|
||||
|
||||
// DeployFunctionRequest represents a request to deploy a function
|
||||
type DeployFunctionRequest struct {
|
||||
FunctionID uuid.UUID `json:"function_id"`
|
||||
Force bool `json:"force,omitempty"`
|
||||
}
|
||||
|
||||
// DeployFunctionResponse represents a response for function deployment
|
||||
type DeployFunctionResponse struct {
|
||||
Status string `json:"status"`
|
||||
Message string `json:"message,omitempty"`
|
||||
Image string `json:"image,omitempty"`
|
||||
ImageID string `json:"image_id,omitempty"`
|
||||
}
|
||||
|
||||
// ListFunctionsResponse represents the response for listing functions
|
||||
type ListFunctionsResponse struct {
|
||||
Functions []FunctionDefinition `json:"functions"`
|
||||
Limit int `json:"limit"`
|
||||
Offset int `json:"offset"`
|
||||
}
|
||||
|
||||
// ListExecutionsResponse represents the response for listing executions
|
||||
type ListExecutionsResponse struct {
|
||||
Executions []FunctionExecution `json:"executions"`
|
||||
Limit int `json:"limit"`
|
||||
Offset int `json:"offset"`
|
||||
}
|
||||
|
||||
// GetLogsResponse represents the response for getting execution logs
|
||||
type GetLogsResponse struct {
|
||||
Logs []string `json:"logs"`
|
||||
}
|
||||
|
||||
// GetRunningExecutionsResponse represents the response for getting running executions
|
||||
type GetRunningExecutionsResponse struct {
|
||||
Executions []FunctionExecution `json:"executions"`
|
||||
Count int `json:"count"`
|
||||
}
|
||||
1 faas/.gitignore vendored
@ -1 +0,0 @@
server
@ -1,38 +0,0 @@
# Build stage
FROM docker.io/golang:1.23-alpine AS builder

WORKDIR /app

# Install dependencies
RUN apk add --no-cache git

# Copy go.mod and go.sum
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build the application
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o faas-server ./cmd/server

# Final stage
FROM docker.io/alpine:latest

RUN apk --no-cache add ca-certificates

WORKDIR /app

# Copy the binary
COPY --from=builder /app/faas-server .

# Copy migrations
COPY --from=builder /app/migrations ./migrations

# Expose port
EXPOSE 8082 9091

# Run the application
CMD ["./faas-server"]
@ -1,133 +0,0 @@
# Skybridge FaaS Implementation Guide

This document explains the implementation of the Function-as-a-Service (FaaS) component in Skybridge, specifically focusing on the Docker runtime implementation that replaced the original mock implementation.

## Overview

The Skybridge FaaS platform allows users to deploy and execute functions in isolated containers. The implementation consists of several key components (a rough interface sketch follows the list):

1. **Function Management**: CRUD operations for function definitions
2. **Execution Engine**: Runtime backend for executing functions
3. **Repository Layer**: Data persistence for functions and executions
4. **Services Layer**: Business logic implementation
5. **API Layer**: RESTful interface for managing functions
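The concrete interfaces live in the Go packages elsewhere in this diff; as a rough illustration of how the execution engine is separated from the rest of the stack, a hypothetical runtime contract could look like the following. The names here are illustrative, not the repository's actual declarations.

```go
package runtime

import (
	"context"
	"encoding/json"

	"github.com/google/uuid"
)

// Result is an illustrative shape for what a runtime backend hands back to
// the services layer: output, captured logs, and an exit code.
type Result struct {
	Output   json.RawMessage
	Logs     []string
	ExitCode int
}

// Backend is an illustrative contract between the services layer and a
// concrete runtime such as the Docker implementation described below.
type Backend interface {
	Execute(ctx context.Context, functionID uuid.UUID, input json.RawMessage) (*Result, error)
	HealthCheck(ctx context.Context) error
}
```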
## Docker Runtime Implementation

The original implementation contained a mock Docker runtime (`faas/internal/runtime/docker/simple.go`) that didn't actually interact with Docker. The new implementation provides real container execution capabilities.

### Key Features Implemented

1. **Real Docker Client Integration**: Uses the official Docker client library to communicate with the Docker daemon
2. **Container Lifecycle Management**: Creates, starts, waits for, and cleans up containers
3. **Image Management**: Pulls images when they don't exist locally
4. **Resource Limiting**: Applies memory limits to containers
5. **Input/Output Handling**: Passes input to functions and captures output
6. **Logging**: Retrieves container logs for debugging
7. **Health Checks**: Verifies Docker daemon connectivity (a minimal `Ping` sketch follows this list)
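For the health-check feature, a minimal sketch using the official client (`github.com/docker/docker/client`) is shown below; the repository's runtime may construct and reuse its client differently.

```go
package docker

import (
	"context"
	"fmt"

	"github.com/docker/docker/client"
)

// healthCheck verifies that the Docker daemon is reachable.
// Sketch only; the actual runtime likely holds a long-lived client.
func healthCheck(ctx context.Context) error {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return fmt.Errorf("create docker client: %w", err)
	}
	defer cli.Close()

	if _, err := cli.Ping(ctx); err != nil {
		return fmt.Errorf("docker daemon not reachable: %w", err)
	}
	return nil
}
```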
### Implementation Details

#### Container Creation

The `createContainer` method creates a Docker container with the following configuration (a sketch follows the list):

- **Environment Variables**: Function environment variables plus input data
- **Resource Limits**: Memory limits based on function configuration
- **Attached Streams**: STDOUT and STDERR for log capture
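A minimal sketch of that configuration with the official Go SDK follows. The `FUNCTION_INPUT` variable name is hypothetical, and `ContainerCreate`'s argument list varies slightly between SDK versions.

```go
package docker

import (
	"context"
	"encoding/json"
	"fmt"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// createContainer sketches the configuration described above: function env
// vars plus the input payload, a memory cap, and attached output streams.
func createContainer(ctx context.Context, cli *client.Client, image string, env map[string]string, input json.RawMessage, memoryMB int) (string, error) {
	envList := []string{"FUNCTION_INPUT=" + string(input)} // hypothetical variable name
	for k, v := range env {
		envList = append(envList, k+"="+v)
	}

	resp, err := cli.ContainerCreate(ctx,
		&container.Config{
			Image:        image,
			Env:          envList,
			AttachStdout: true,
			AttachStderr: true,
		},
		&container.HostConfig{
			Resources: container.Resources{Memory: int64(memoryMB) * 1024 * 1024},
		},
		nil, nil, "")
	if err != nil {
		return "", fmt.Errorf("create container: %w", err)
	}
	return resp.ID, nil
}
```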
#### Function Execution Flow

1. **Container Creation**: Creates a container from the function's Docker image
2. **Container Start**: Starts the container execution
3. **Wait for Completion**: Waits for the container to finish execution
4. **Result Collection**: Gathers output, logs, and execution metadata
5. **Cleanup**: Removes the container to free resources (see the sketch after this list)
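The flow above roughly corresponds to the following sketch. Option struct names follow recent Docker SDK versions; older SDKs use the `types.Container*Options` equivalents.

```go
package docker

import (
	"context"
	"fmt"
	"io"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// runContainer sketches steps 2-5: start, wait, collect output, clean up.
func runContainer(ctx context.Context, cli *client.Client, id string) ([]byte, error) {
	// Step 5: always remove the container, even if execution fails.
	defer cli.ContainerRemove(context.Background(), id, container.RemoveOptions{Force: true})

	// Step 2: start the container.
	if err := cli.ContainerStart(ctx, id, container.StartOptions{}); err != nil {
		return nil, fmt.Errorf("start container: %w", err)
	}

	// Step 3: block until the container stops.
	statusCh, errCh := cli.ContainerWait(ctx, id, container.WaitConditionNotRunning)
	select {
	case err := <-errCh:
		if err != nil {
			return nil, fmt.Errorf("wait for container: %w", err)
		}
	case status := <-statusCh:
		if status.StatusCode != 0 {
			return nil, fmt.Errorf("function exited with status %d", status.StatusCode)
		}
	}

	// Step 4: collect output from the attached streams; a real implementation
	// would demultiplex stdout/stderr with pkg/stdcopy.
	rc, err := cli.ContainerLogs(ctx, id, container.LogsOptions{ShowStdout: true, ShowStderr: true})
	if err != nil {
		return nil, fmt.Errorf("fetch logs: %w", err)
	}
	defer rc.Close()
	return io.ReadAll(rc)
}
```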
#### Error Handling
|
||||
|
||||
The implementation includes comprehensive error handling:
|
||||
|
||||
- **Connection Errors**: Handles Docker daemon connectivity issues
|
||||
- **Container Errors**: Manages container creation and execution failures
|
||||
- **Resource Errors**: Handles resource constraint violations
|
||||
- **Graceful Cleanup**: Ensures containers are cleaned up even on failures
|
||||
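
The snippet below is a minimal sketch of this create → start → wait → collect → clean-up flow using the official Docker client library. The function name (`runFunction`), its parameters, and the specific option values are illustrative and are not taken from the Skybridge source; option struct names also differ slightly between Docker client versions (this follows the v25+ layout), and production code would demultiplex stdout/stderr with `stdcopy` rather than reading the raw log stream.

```go
package docker

import (
	"context"
	"fmt"
	"io"
	"time"

	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

// runFunction is an illustrative sketch of the execution flow described above.
func runFunction(ctx context.Context, image string, env []string, memoryMB int64, timeout time.Duration) (string, error) {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		return "", fmt.Errorf("docker client: %w", err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	// Container creation: image, environment (including the function input), memory limit.
	resp, err := cli.ContainerCreate(ctx,
		&container.Config{Image: image, Env: env},
		&container.HostConfig{Resources: container.Resources{Memory: memoryMB * 1024 * 1024}},
		nil, nil, "")
	if err != nil {
		return "", fmt.Errorf("create container: %w", err)
	}
	// Graceful cleanup: remove the container even if a later step fails.
	defer cli.ContainerRemove(context.Background(), resp.ID, container.RemoveOptions{Force: true})

	if err := cli.ContainerStart(ctx, resp.ID, container.StartOptions{}); err != nil {
		return "", fmt.Errorf("start container: %w", err)
	}

	// Wait for completion (or for the timeout to cancel the context).
	statusCh, errCh := cli.ContainerWait(ctx, resp.ID, container.WaitConditionNotRunning)
	select {
	case err := <-errCh:
		return "", fmt.Errorf("wait: %w", err)
	case <-statusCh:
	}

	// Result collection: read stdout/stderr as the function output and logs.
	logs, err := cli.ContainerLogs(ctx, resp.ID, container.LogsOptions{ShowStdout: true, ShowStderr: true})
	if err != nil {
		return "", fmt.Errorf("logs: %w", err)
	}
	defer logs.Close()

	out, err := io.ReadAll(logs)
	if err != nil {
		return "", fmt.Errorf("read logs: %w", err)
	}
	return string(out), nil
}
```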

## Testing

### Unit Tests

Unit tests are located in `faas/test/integration/` and cover:

- Docker runtime health checks
- Container creation and execution
- Error conditions

### Example Function

An example "Hello World" function is provided in `faas/examples/hello-world/` to demonstrate:

- Function structure and implementation
- Docker image creation
- Local testing
- Deployment to Skybridge FaaS

## Deployment

### Prerequisites

1. Docker daemon running and accessible
2. Docker socket mounted to the FaaS service container (as shown in `docker-compose.yml`)
3. Required permissions to access Docker

### Configuration

The FaaS service reads configuration from environment variables:

- `FAAS_DEFAULT_RUNTIME`: Should be set to "docker"
- Docker socket path: Typically `/var/run/docker.sock`

## Security Considerations

The current implementation has basic security features:

- **Container Isolation**: Functions run in isolated containers
- **Resource Limits**: Prevents resource exhaustion
- **Image Verification**: Only pulls trusted images

For production use, consider implementing:

- Container user restrictions
- Network isolation
- Enhanced logging and monitoring
- Authentication and authorization for Docker operations

## Performance Optimizations

Potential performance improvements include:

- **Image Caching**: Pre-pull commonly used images
- **Container Pooling**: Maintain a pool of ready containers
- **Parallel Execution**: Optimize concurrent function execution
- **Resource Monitoring**: Track and optimize resource usage

## Future Enhancements

Planned enhancements include:

1. **Multiple Runtime Support**: Add support for Podman and other container runtimes
2. **Advanced Resource Management**: CPU quotas, disk limits
3. **Enhanced Monitoring**: Detailed metrics and tracing
4. **Improved Error Handling**: More granular error reporting
5. **Security Hardening**: Additional security measures for container execution

## API Usage

The FaaS API provides endpoints for:

- **Function Management**: Create, read, update, delete functions
- **Deployment**: Deploy functions to prepare for execution
- **Execution**: Execute functions synchronously or asynchronously
- **Monitoring**: View execution status, logs, and metrics

Refer to the API documentation endpoint (`/api/docs`) for detailed information.
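
For example, with the stack from `docker-compose.yml` running (the API is mapped to port 8083 there) and header-based auth enabled, the function list and the currently running executions can be queried as follows; the email value is only a placeholder:

```bash
# List registered functions
curl -s http://localhost:8083/api/functions -H "X-User-Email: test@example.com"

# List executions that are currently running
curl -s http://localhost:8083/api/executions/running -H "X-User-Email: test@example.com"
```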
@ -1,281 +0,0 @@
package main

import (
	"context"
	"log"
	"net/http"
	"os"
	"os/signal"
	"strings"
	"syscall"
	"time"

	"github.com/gin-gonic/gin"
	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/faas/internal/config"
	"github.com/RyanCopley/skybridge/faas/internal/database"
	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/handlers"
	"github.com/RyanCopley/skybridge/faas/internal/repository/postgres"
	"github.com/RyanCopley/skybridge/faas/internal/services"
)

func main() {
	// Initialize configuration
	cfg := config.NewConfig()
	if err := cfg.Validate(); err != nil {
		log.Fatal("Configuration validation failed:", err)
	}

	// Initialize logger
	logger := initLogger(cfg)
	defer logger.Sync()

	logger.Info("Starting Function-as-a-Service",
		zap.String("version", "1.0.0"),
		zap.String("environment", cfg.GetString("FAAS_APP_ENV")),
	)

	// Initialize database
	logger.Info("Connecting to database",
		zap.String("dsn", cfg.GetDatabaseDSNForLogging()))

	db, err := database.NewPostgresProvider(
		cfg.GetDatabaseDSN(),
		cfg.GetInt("DB_MAX_OPEN_CONNS"),
		cfg.GetInt("DB_MAX_IDLE_CONNS"),
		cfg.GetString("DB_CONN_MAX_LIFETIME"),
		logger,
	)
	if err != nil {
		logger.Fatal("Failed to initialize database",
			zap.String("dsn", cfg.GetDatabaseDSNForLogging()),
			zap.Error(err))
	}

	logger.Info("Database connection established successfully")

	// Initialize repositories
	functionRepo := postgres.NewFunctionRepository(db, logger)
	executionRepo := postgres.NewExecutionRepository(db, logger)

	// Initialize services
	runtimeService := services.NewRuntimeService(logger, nil)
	functionService := services.NewFunctionService(functionRepo, runtimeService, logger)
	executionService := services.NewExecutionService(executionRepo, functionRepo, runtimeService, logger)
	authService := services.NewAuthService(logger) // Mock auth service for now

	// Initialize handlers
	healthHandler := handlers.NewHealthHandler(db, logger)
	functionHandler := handlers.NewFunctionHandler(functionService, authService, logger)
	executionHandler := handlers.NewExecutionHandler(executionService, authService, logger)

	// Set up router
	router := setupRouter(cfg, logger, healthHandler, functionHandler, executionHandler)

	// Create HTTP server
	srv := &http.Server{
		Addr:         cfg.GetServerAddress(),
		Handler:      router,
		ReadTimeout:  cfg.GetDuration("SERVER_READ_TIMEOUT"),
		WriteTimeout: cfg.GetDuration("SERVER_WRITE_TIMEOUT"),
		IdleTimeout:  cfg.GetDuration("SERVER_IDLE_TIMEOUT"),
	}

	// Start server in goroutine
	go func() {
		logger.Info("Starting HTTP server", zap.String("address", srv.Addr))
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			logger.Fatal("Failed to start server", zap.Error(err))
		}
	}()

	// Start metrics server if enabled
	var metricsSrv *http.Server
	if cfg.GetBool("METRICS_ENABLED") {
		metricsSrv = startMetricsServer(cfg, logger)
	}

	// Wait for interrupt signal to gracefully shutdown the server
	quit := make(chan os.Signal, 1)
	signal.Notify(quit, syscall.SIGINT, syscall.SIGTERM)
	<-quit

	logger.Info("Shutting down server...")

	// Give outstanding requests time to complete
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Shutdown main server
	if err := srv.Shutdown(ctx); err != nil {
		logger.Error("Server forced to shutdown", zap.Error(err))
	}

	// Shutdown metrics server
	if metricsSrv != nil {
		if err := metricsSrv.Shutdown(ctx); err != nil {
			logger.Error("Metrics server forced to shutdown", zap.Error(err))
		}
	}

	logger.Info("Server exited")
}

func initLogger(cfg config.ConfigProvider) *zap.Logger {
	var logger *zap.Logger
	var err error

	logLevel := cfg.GetString("FAAS_LOG_LEVEL")

	if cfg.IsProduction() && logLevel != "debug" {
		logger, err = zap.NewProduction()
	} else {
		// Use development logger for non-production or when debug is explicitly requested
		config := zap.NewDevelopmentConfig()

		// Set log level based on environment variable
		switch strings.ToLower(logLevel) {
		case "debug":
			config.Level = zap.NewAtomicLevelAt(zap.DebugLevel)
		case "info":
			config.Level = zap.NewAtomicLevelAt(zap.InfoLevel)
		case "warn":
			config.Level = zap.NewAtomicLevelAt(zap.WarnLevel)
		case "error":
			config.Level = zap.NewAtomicLevelAt(zap.ErrorLevel)
		default:
			config.Level = zap.NewAtomicLevelAt(zap.DebugLevel) // Default to debug for development
		}

		logger, err = config.Build()
	}

	if err != nil {
		log.Fatal("Failed to initialize logger:", err)
	}

	return logger
}

func setupRouter(cfg config.ConfigProvider, logger *zap.Logger, healthHandler *handlers.HealthHandler, functionHandler *handlers.FunctionHandler, executionHandler *handlers.ExecutionHandler) *gin.Engine {
	// Set Gin mode based on environment
	if cfg.IsProduction() {
		gin.SetMode(gin.ReleaseMode)
	}

	router := gin.New()

	// Add middleware
	router.Use(gin.Logger())
	router.Use(gin.Recovery())

	// CORS middleware
	router.Use(func(c *gin.Context) {
		c.Header("Access-Control-Allow-Origin", "*")
		c.Header("Access-Control-Allow-Methods", "GET, POST, PUT, DELETE, OPTIONS")
		c.Header("Access-Control-Allow-Headers", "Content-Type, Authorization, X-User-Email")

		if c.Request.Method == "OPTIONS" {
			c.AbortWithStatus(http.StatusNoContent)
			return
		}

		c.Next()
	})

	// Health check endpoints (no authentication required)
	router.GET("/health", healthHandler.Health)
	router.GET("/ready", healthHandler.Ready)

	// API routes
	api := router.Group("/api")
	{
		// Function Management
		api.GET("/functions", functionHandler.List)
		api.POST("/functions", functionHandler.Create)
		api.GET("/functions/:id", functionHandler.GetByID)
		api.PUT("/functions/:id", functionHandler.Update)
		api.DELETE("/functions/:id", functionHandler.Delete)
		api.POST("/functions/:id/deploy", functionHandler.Deploy)

		// Function Execution
		api.POST("/functions/:id/execute", executionHandler.Execute)
		api.POST("/functions/:id/invoke", executionHandler.Invoke)

		// Execution Management
		api.GET("/executions", executionHandler.List)
		api.GET("/executions/:id", executionHandler.GetByID)
		api.DELETE("/executions/:id", executionHandler.Cancel)
		api.GET("/executions/:id/logs", executionHandler.GetLogs)
		api.GET("/executions/running", executionHandler.GetRunning)

		// Runtime information endpoint
		api.GET("/runtimes", func(c *gin.Context) {
			c.JSON(http.StatusOK, gin.H{
				"runtimes": domain.GetAvailableRuntimes(),
			})
		})

		// Documentation endpoint
		api.GET("/docs", func(c *gin.Context) {
			c.JSON(http.StatusOK, gin.H{
				"service":       "Function-as-a-Service",
				"version":       "1.0.0",
				"documentation": "FaaS API Documentation",
				"endpoints": map[string]interface{}{
					"functions": []string{
						"GET /api/functions",
						"POST /api/functions",
						"GET /api/functions/:id",
						"PUT /api/functions/:id",
						"DELETE /api/functions/:id",
						"POST /api/functions/:id/deploy",
					},
					"executions": []string{
						"POST /api/functions/:id/execute",
						"POST /api/functions/:id/invoke",
						"GET /api/executions",
						"GET /api/executions/:id",
						"DELETE /api/executions/:id",
						"GET /api/executions/:id/logs",
						"GET /api/executions/running",
					},
				},
			})
		})
	}

	return router
}

func startMetricsServer(cfg config.ConfigProvider, logger *zap.Logger) *http.Server {
	mux := http.NewServeMux()

	// Health endpoint for metrics server
	mux.HandleFunc("/health", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("OK"))
	})

	// Metrics endpoint would go here
	mux.HandleFunc("/metrics", func(w http.ResponseWriter, r *http.Request) {
		w.WriteHeader(http.StatusOK)
		w.Write([]byte("# FaaS metrics placeholder\n"))
	})

	srv := &http.Server{
		Addr:    cfg.GetString("FAAS_SERVER_HOST") + ":" + cfg.GetString("METRICS_PORT"),
		Handler: mux,
	}

	go func() {
		logger.Info("Starting metrics server", zap.String("address", srv.Addr))
		if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
			logger.Error("Failed to start metrics server", zap.Error(err))
		}
	}()

	return srv
}
@ -1,93 +0,0 @@
version: '3.8'

services:
  faas-postgres:
    image: docker.io/library/postgres:15-alpine
    container_name: faas-postgres
    environment:
      POSTGRES_DB: faas
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - "5434:5432"
    volumes:
      - faas_postgres_data:/var/lib/postgresql/data
      - ./migrations:/docker-entrypoint-initdb.d:Z
    networks:
      - faas-network
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres -d faas"]
      interval: 10s
      timeout: 5s
      retries: 5

  faas-api-service:
    build:
      context: .
      dockerfile: Dockerfile
    container_name: faas-api-service
    # user: "1000:1000" # Run as root to access Podman socket properly
    environment:
      FAAS_APP_ENV: development
      FAAS_DB_HOST: faas-postgres
      FAAS_DB_PORT: 5432
      FAAS_DB_NAME: faas
      FAAS_DB_USER: postgres
      FAAS_DB_PASSWORD: postgres
      FAAS_DB_SSLMODE: disable
      DB_CONN_MAX_LIFETIME: 5m
      DB_MAX_OPEN_CONNS: 25
      DB_MAX_IDLE_CONNS: 5
      FAAS_SERVER_HOST: 0.0.0.0
      FAAS_SERVER_PORT: 8083
      FAAS_LOG_LEVEL: debug
      FAAS_DEFAULT_RUNTIME: docker
      FAAS_FUNCTION_TIMEOUT: 300s
      FAAS_MAX_MEMORY: 3008
      FAAS_MAX_CONCURRENT: 100
      FAAS_SANDBOX_ENABLED: true
      FAAS_NETWORK_ISOLATION: true
      FAAS_RESOURCE_LIMITS: true
      AUTH_PROVIDER: header
      AUTH_HEADER_USER_EMAIL: X-User-Email
      RATE_LIMIT_ENABLED: true
      METRICS_ENABLED: true
      METRICS_PORT: 9091
    ports:
      - "8083:8083"
      - "9091:9091" # Metrics port
    depends_on:
      faas-postgres:
        condition: service_healthy
    networks:
      - faas-network
    volumes:
      - /run/user/1000/podman:/run/user/1000/podman:z # Mount entire Podman runtime directory
      - ./migrations:/app/migrations:ro,Z
    cap_add:
      - SYS_ADMIN
      - MKNOD
    devices:
      - /dev/fuse
    security_opt:
      - label=disable
    restart: unless-stopped

  # faas-frontend:
  #   build:
  #     context: ./web
  #     dockerfile: Dockerfile
  #   container_name: faas-frontend
  #   ports:
  #     - "3003:80"
  #   networks:
  #     - faas-network
  #   restart: unless-stopped

volumes:
  faas_postgres_data:
    driver: local

networks:
  faas-network:
    driver: bridge
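
Assuming this file is saved as `docker-compose.yml` in the `faas/` directory, the stack can be brought up with either Docker Compose or podman-compose; the exact tooling is a local choice rather than something the repository prescribes:

```bash
# From the faas/ directory
docker compose up -d --build
# or, with Podman
podman-compose up -d --build

# Verify the API and metrics endpoints
curl -s http://localhost:8083/health
curl -s http://localhost:9091/metrics
```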
@ -1,30 +0,0 @@
# Build stage
FROM golang:1.23-alpine AS builder

WORKDIR /app

# Copy go mod files
COPY go.mod go.sum ./

# Download dependencies
RUN go mod download

# Copy source code
COPY . .

# Build the binary
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o handler .

# Final stage
FROM alpine:latest

# Install ca-certificates
RUN apk --no-cache add ca-certificates

WORKDIR /app

# Copy the binary from builder stage
COPY --from=builder /app/handler .

# Run the handler
CMD ["./handler"]
@ -1,75 +0,0 @@
# Hello World Function Example

This is a simple example function that demonstrates how to create and deploy functions in the Skybridge FaaS platform.

## Function Description

The function takes a JSON input with an optional `name` field and returns a greeting message.

### Input Format
```json
{
  "name": "John"
}
```

### Output Format
```json
{
  "message": "Hello, John!",
  "input": {
    "name": "John"
  }
}
```

## Building the Function

To build the function as a Docker image:

```bash
docker build -t hello-world-function .
```

## Testing the Function Locally

To test the function locally:

```bash
# Test with a name
docker run -e FUNCTION_INPUT='{"name": "Alice"}' hello-world-function

# Test without a name (defaults to "World")
docker run hello-world-function
```

## Deploying to Skybridge FaaS

Once you have the Skybridge FaaS platform running, you can deploy this function using the API:

1. Create the function:
```bash
curl -X POST http://localhost:8083/api/functions \
  -H "Content-Type: application/json" \
  -H "X-User-Email: test@example.com" \
  -d '{
    "name": "hello-world",
    "image": "hello-world-function",
    "runtime": "custom",
    "memory": 128,
    "timeout": "30s"
  }'
```

2. Deploy the function:
```bash
curl -X POST http://localhost:8083/api/functions/{function-id}/deploy \
  -H "X-User-Email: test@example.com"
```

3. Execute the function:
```bash
curl -X POST http://localhost:8083/api/functions/{function-id}/execute \
  -H "Content-Type: application/json" \
  -H "X-User-Email: test@example.com" \
  -d '{"input": {"name": "Bob"}}'
```
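
4. Optionally, check the execution status and logs (illustrative step; substitute the execution ID returned by the execute call — the exact response shape is defined by the FaaS handlers and is not shown here):
```bash
curl http://localhost:8083/api/executions/{execution-id} \
  -H "X-User-Email: test@example.com"

curl http://localhost:8083/api/executions/{execution-id}/logs \
  -H "X-User-Email: test@example.com"
```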
@ -1,23 +0,0 @@
#!/bin/bash

# Build script for hello-world function

set -e

echo "Building hello-world function..."

# Build the Docker image
docker build -t hello-world-function .

echo "Testing the function locally..."

# Test without input
echo "Test 1: No input"
docker run --rm hello-world-function

echo ""
echo "Test 2: With name input"
docker run --rm -e FUNCTION_INPUT='{"name": "Alice"}' hello-world-function

echo ""
echo "Function built and tested successfully!"
@ -1,5 +0,0 @@
module hello-world-function

go 1.23.0

toolchain go1.24.4
@ -1,44 +0,0 @@
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	// Read input from environment variable
	input := os.Getenv("FUNCTION_INPUT")
	if input == "" {
		input = "{}"
	}

	// Parse input
	var inputData map[string]interface{}
	if err := json.Unmarshal([]byte(input), &inputData); err != nil {
		fmt.Printf("Error parsing input: %v\n", err)
		os.Exit(1)
	}

	// Process the input and generate output
	name, ok := inputData["name"].(string)
	if !ok {
		name = "World"
	}

	message := fmt.Sprintf("Hello, %s!", name)

	// Output result as JSON
	result := map[string]interface{}{
		"message": message,
		"input":   inputData,
	}

	output, err := json.Marshal(result)
	if err != nil {
		fmt.Printf("Error marshaling output: %v\n", err)
		os.Exit(1)
	}

	fmt.Println(string(output))
}
68
faas/go.mod
68
faas/go.mod
@ -1,68 +0,0 @@
module github.com/RyanCopley/skybridge/faas

go 1.23.0

toolchain go1.24.4

require (
	github.com/docker/docker v28.3.3+incompatible
	github.com/docker/go-connections v0.4.0
	github.com/gin-gonic/gin v1.9.1
	github.com/google/uuid v1.6.0
	github.com/lib/pq v1.10.9
	go.uber.org/zap v1.26.0
)

require (
	github.com/Microsoft/go-winio v0.6.2 // indirect
	github.com/bytedance/sonic v1.9.1 // indirect
	github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
	github.com/containerd/errdefs v1.0.0 // indirect
	github.com/containerd/errdefs/pkg v0.3.0 // indirect
	github.com/containerd/log v0.1.0 // indirect
	github.com/distribution/reference v0.6.0 // indirect
	github.com/docker/go-units v0.5.0 // indirect
	github.com/felixge/httpsnoop v1.0.4 // indirect
	github.com/gabriel-vasile/mimetype v1.4.2 // indirect
	github.com/gin-contrib/sse v0.1.0 // indirect
	github.com/go-logr/logr v1.4.3 // indirect
	github.com/go-logr/stdr v1.2.2 // indirect
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/go-playground/validator/v10 v10.16.0 // indirect
	github.com/goccy/go-json v0.10.2 // indirect
	github.com/gogo/protobuf v1.3.2 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/cpuid/v2 v2.2.4 // indirect
	github.com/leodido/go-urn v1.2.4 // indirect
	github.com/mattn/go-isatty v0.0.19 // indirect
	github.com/moby/docker-image-spec v1.3.1 // indirect
	github.com/moby/sys/atomicwriter v0.1.0 // indirect
	github.com/moby/term v0.5.2 // indirect
	github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect
	github.com/modern-go/reflect2 v1.0.2 // indirect
	github.com/morikuni/aec v1.0.0 // indirect
	github.com/opencontainers/go-digest v1.0.0 // indirect
	github.com/opencontainers/image-spec v1.1.1 // indirect
	github.com/pelletier/go-toml/v2 v2.0.8 // indirect
	github.com/pkg/errors v0.9.1 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
	github.com/ugorji/go/codec v1.2.11 // indirect
	go.opentelemetry.io/auto/sdk v1.1.0 // indirect
	go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 // indirect
	go.opentelemetry.io/otel v1.38.0 // indirect
	go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 // indirect
	go.opentelemetry.io/otel/metric v1.38.0 // indirect
	go.opentelemetry.io/otel/trace v1.38.0 // indirect
	go.uber.org/goleak v1.3.0 // indirect
	go.uber.org/multierr v1.10.0 // indirect
	golang.org/x/arch v0.3.0 // indirect
	golang.org/x/crypto v0.41.0 // indirect
	golang.org/x/net v0.43.0 // indirect
	golang.org/x/sys v0.35.0 // indirect
	golang.org/x/text v0.28.0 // indirect
	golang.org/x/time v0.12.0 // indirect
	google.golang.org/protobuf v1.36.8 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
	gotest.tools/v3 v3.5.2 // indirect
)
208
faas/go.sum
208
faas/go.sum
@ -1,208 +0,0 @@
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c h1:udKWzYgxTojEKWjV8V+WSxDXJ4NFATAsZjh8iIbsQIg=
|
||||
github.com/Azure/go-ansiterm v0.0.0-20250102033503-faa5f7b0171c/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
|
||||
github.com/Microsoft/go-winio v0.6.2 h1:F2VQgta7ecxGYO8k3ZZz3RS8fVIXVxONVUPlNERoyfY=
|
||||
github.com/Microsoft/go-winio v0.6.2/go.mod h1:yd8OoFMLzJbo9gZq8j5qaps8bJ9aShtEA8Ipt1oGCvU=
|
||||
github.com/bytedance/sonic v1.5.0/go.mod h1:ED5hyg4y6t3/9Ku1R6dU/4KyJ48DZ4jPhfY1O2AihPM=
|
||||
github.com/bytedance/sonic v1.9.1 h1:6iJ6NqdoxCDr6mbY8h18oSO+cShGSMRGCEo7F2h0x8s=
|
||||
github.com/bytedance/sonic v1.9.1/go.mod h1:i736AoUSYt75HyZLoJW9ERYxcy6eaN6h4BZXU064P/U=
|
||||
github.com/cenkalti/backoff/v5 v5.0.3 h1:ZN+IMa753KfX5hd8vVaMixjnqRZ3y8CuJKRKj1xcsSM=
|
||||
github.com/cenkalti/backoff/v5 v5.0.3/go.mod h1:rkhZdG3JZukswDf7f0cwqPNk4K0sa+F97BxZthm/crw=
|
||||
github.com/chenzhuoyu/base64x v0.0.0-20211019084208-fb5309c8db06/go.mod h1:DH46F32mSOjUmXrMHnKwZdA8wcEefY7UVqBKYGjpdQY=
|
||||
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 h1:qSGYFH7+jGhDF8vLC+iwCD4WpbV1EBDSzWkJODFLams=
|
||||
github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311/go.mod h1:b583jCggY9gE99b6G5LEC39OIiVsWj+R97kbl5odCEk=
|
||||
github.com/containerd/errdefs v1.0.0 h1:tg5yIfIlQIrxYtu9ajqY42W3lpS19XqdxRQeEwYG8PI=
|
||||
github.com/containerd/errdefs v1.0.0/go.mod h1:+YBYIdtsnF4Iw6nWZhJcqGSg/dwvV7tyJ/kCkyJ2k+M=
|
||||
github.com/containerd/errdefs/pkg v0.3.0 h1:9IKJ06FvyNlexW690DXuQNx2KA2cUJXx151Xdx3ZPPE=
|
||||
github.com/containerd/errdefs/pkg v0.3.0/go.mod h1:NJw6s9HwNuRhnjJhM7pylWwMyAkmCQvQ4GpJHEqRLVk=
|
||||
github.com/containerd/log v0.1.0 h1:TCJt7ioM2cr/tfR8GPbGf9/VRAX8D2B4PjzCpfX540I=
|
||||
github.com/containerd/log v0.1.0/go.mod h1:VRRf09a7mHDIRezVKTRCrOq78v577GXq3bSa3EhrzVo=
|
||||
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
|
||||
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
|
||||
github.com/distribution/reference v0.6.0 h1:0IXCQ5g4/QMHHkarYzh5l+u8T3t73zM5QvfrDyIgxBk=
|
||||
github.com/distribution/reference v0.6.0/go.mod h1:BbU0aIcezP1/5jX/8MP0YiH4SdvB5Y4f/wlDRiLyi3E=
|
||||
github.com/docker/docker v28.3.3+incompatible h1:Dypm25kh4rmk49v1eiVbsAtpAsYURjYkaKubwuBdxEI=
|
||||
github.com/docker/docker v28.3.3+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
|
||||
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
|
||||
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
|
||||
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
|
||||
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
|
||||
github.com/felixge/httpsnoop v1.0.4 h1:NFTV2Zj1bL4mc9sqWACXbQFVBBg2W3GPvqp8/ESS2Wg=
|
||||
github.com/felixge/httpsnoop v1.0.4/go.mod h1:m8KPJKqk1gH5J9DgRY2ASl2lWCfGKXixSwevea8zH2U=
|
||||
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
|
||||
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
|
||||
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
|
||||
github.com/gin-contrib/sse v0.1.0/go.mod h1:RHrZQHXnP2xjPF+u1gW/2HnVO7nvIa9PG3Gm+fLHvGI=
|
||||
github.com/gin-gonic/gin v1.9.1 h1:4idEAncQnU5cB7BeOkPtxjfCSye0AAm1R0RVIqJ+Jmg=
|
||||
github.com/gin-gonic/gin v1.9.1/go.mod h1:hPrL7YrpYKXt5YId3A/Tnip5kqbEAP+KLuI3SUcPTeU=
|
||||
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
|
||||
github.com/go-logr/logr v1.4.3 h1:CjnDlHq8ikf6E492q6eKboGOC0T8CDaOvkHCIg8idEI=
|
||||
github.com/go-logr/logr v1.4.3/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
|
||||
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
|
||||
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
|
||||
github.com/go-playground/assert/v2 v2.2.0 h1:JvknZsQTYeFEAhQwI4qEt9cyV5ONwRHC+lYKSsYSR8s=
|
||||
github.com/go-playground/assert/v2 v2.2.0/go.mod h1:VDjEfimB/XKnb+ZQfWdccd7VUvScMdVu0Titje2rxJ4=
|
||||
github.com/go-playground/locales v0.14.1 h1:EWaQ/wswjilfKLTECiXz7Rh+3BjFhfDFKv/oXslEjJA=
|
||||
github.com/go-playground/locales v0.14.1/go.mod h1:hxrqLVvrK65+Rwrd5Fc6F2O76J/NuW9t0sjnWqG1slY=
|
||||
github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJnYK9S473LQFuzCbDbfSFY=
|
||||
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
|
||||
github.com/go-playground/validator/v10 v10.16.0 h1:x+plE831WK4vaKHO/jpgUGsvLKIqRRkz6M78GuJAfGE=
|
||||
github.com/go-playground/validator/v10 v10.16.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
|
||||
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
|
||||
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
|
||||
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
|
||||
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
|
||||
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
|
||||
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
|
||||
github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg=
|
||||
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
|
||||
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2 h1:8Tjv8EJ+pM1xP8mK6egEbD1OgnVTyacbefKhmbLhIhU=
|
||||
github.com/grpc-ecosystem/grpc-gateway/v2 v2.27.2/go.mod h1:pkJQ2tZHJ0aFOVEEot6oZmaVEZcRme73eIFmhiVuRWs=
|
||||
github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM=
|
||||
github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo=
|
||||
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
|
||||
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
|
||||
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
|
||||
github.com/klauspost/cpuid/v2 v2.2.4 h1:acbojRNwl3o09bUq+yDCtZFc1aiwaAAxtcn8YkZXnvk=
|
||||
github.com/klauspost/cpuid/v2 v2.2.4/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
|
||||
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
|
||||
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
|
||||
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
|
||||
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
|
||||
github.com/leodido/go-urn v1.2.4 h1:XlAE/cm/ms7TE/VMVoduSpNBoyc2dOxHs5MZSwAN63Q=
|
||||
github.com/leodido/go-urn v1.2.4/go.mod h1:7ZrI8mTSeBSHl/UaRyKQW1qZeMgak41ANeCNaVckg+4=
|
||||
github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
|
||||
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
|
||||
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
|
||||
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
|
||||
github.com/moby/docker-image-spec v1.3.1 h1:jMKff3w6PgbfSa69GfNg+zN/XLhfXJGnEx3Nl2EsFP0=
|
||||
github.com/moby/docker-image-spec v1.3.1/go.mod h1:eKmb5VW8vQEh/BAr2yvVNvuiJuY6UIocYsFu/DxxRpo=
|
||||
github.com/moby/sys/atomicwriter v0.1.0 h1:kw5D/EqkBwsBFi0ss9v1VG3wIkVhzGvLklJ+w3A14Sw=
|
||||
github.com/moby/sys/atomicwriter v0.1.0/go.mod h1:Ul8oqv2ZMNHOceF643P6FKPXeCmYtlQMvpizfsSoaWs=
|
||||
github.com/moby/sys/sequential v0.6.0 h1:qrx7XFUd/5DxtqcoH1h438hF5TmOvzC/lspjy7zgvCU=
|
||||
github.com/moby/sys/sequential v0.6.0/go.mod h1:uyv8EUTrca5PnDsdMGXhZe6CCe8U/UiTWd+lL+7b/Ko=
|
||||
github.com/moby/term v0.5.2 h1:6qk3FJAFDs6i/q3W/pQ97SX192qKfZgGjCQqfCJkgzQ=
|
||||
github.com/moby/term v0.5.2/go.mod h1:d3djjFCrjnB+fl8NJux+EJzu0msscUP+f8it8hPkFLc=
|
||||
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
|
||||
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
|
||||
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
|
||||
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
|
||||
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
|
||||
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
|
||||
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
|
||||
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
|
||||
github.com/opencontainers/image-spec v1.1.1 h1:y0fUlFfIZhPF1W537XOLg0/fcx6zcHCJwooC2xJA040=
|
||||
github.com/opencontainers/image-spec v1.1.1/go.mod h1:qpqAh3Dmcf36wStyyWU+kCeDgrGnAve2nCC8+7h8Q0M=
|
||||
github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
|
||||
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
|
||||
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
|
||||
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
|
||||
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
|
||||
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
|
||||
github.com/rogpeppe/go-internal v1.13.1 h1:KvO1DLK/DRN07sQ1LQKScxyZJuNnedQ5/wKSR38lUII=
|
||||
github.com/rogpeppe/go-internal v1.13.1/go.mod h1:uMEvuHeurkdAXX61udpOXGD/AzZDWNMNyH2VO9fmH0o=
|
||||
github.com/sirupsen/logrus v1.9.3 h1:dueUQJ1C2q9oE3F7wvmSGAaVtTmUizReu6fjN8uqzbQ=
|
||||
github.com/sirupsen/logrus v1.9.3/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
|
||||
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
|
||||
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
|
||||
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
|
||||
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
|
||||
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
|
||||
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
|
||||
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||
github.com/stretchr/testify v1.8.2/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
|
||||
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
|
||||
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
|
||||
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
|
||||
github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS4MhqMhdFk5YI=
|
||||
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
|
||||
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
|
||||
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
|
||||
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
|
||||
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0 h1:RbKq8BG0FI8OiXhBfcRtqqHcZcka+gU3cskNuf05R18=
|
||||
go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp v0.63.0/go.mod h1:h06DGIukJOevXaj/xrNjhi/2098RZzcLTbc0jDAUbsg=
|
||||
go.opentelemetry.io/otel v1.38.0 h1:RkfdswUDRimDg0m2Az18RKOsnI8UDzppJAtj01/Ymk8=
|
||||
go.opentelemetry.io/otel v1.38.0/go.mod h1:zcmtmQ1+YmQM9wrNsTGV/q/uyusom3P8RxwExxkZhjM=
|
||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0 h1:GqRJVj7UmLjCVyVJ3ZFLdPRmhDUp2zFmQe3RHIOsw24=
|
||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace v1.38.0/go.mod h1:ri3aaHSmCTVYu2AWv44YMauwAQc0aqI9gHKIcSbI1pU=
|
||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0 h1:aTL7F04bJHUlztTsNGJ2l+6he8c+y/b//eR0jjjemT4=
|
||||
go.opentelemetry.io/otel/exporters/otlp/otlptrace/otlptracehttp v1.38.0/go.mod h1:kldtb7jDTeol0l3ewcmd8SDvx3EmIE7lyvqbasU3QC4=
|
||||
go.opentelemetry.io/otel/metric v1.38.0 h1:Kl6lzIYGAh5M159u9NgiRkmoMKjvbsKtYRwgfrA6WpA=
|
||||
go.opentelemetry.io/otel/metric v1.38.0/go.mod h1:kB5n/QoRM8YwmUahxvI3bO34eVtQf2i4utNVLr9gEmI=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0 h1:l48sr5YbNf2hpCUj/FoGhW9yDkl+Ma+LrVl8qaM5b+E=
|
||||
go.opentelemetry.io/otel/sdk v1.38.0/go.mod h1:ghmNdGlVemJI3+ZB5iDEuk4bWA3GkTpW+DOoZMYBVVg=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0 h1:aSH66iL0aZqo//xXzQLYozmWrXxyFkBJ6qT5wthqPoM=
|
||||
go.opentelemetry.io/otel/sdk/metric v1.38.0/go.mod h1:dg9PBnW9XdQ1Hd6ZnRz689CbtrUp0wMMs9iPcgT9EZA=
|
||||
go.opentelemetry.io/otel/trace v1.38.0 h1:Fxk5bKrDZJUH+AMyyIXGcFAPah0oRcT+LuNtJrmcNLE=
|
||||
go.opentelemetry.io/otel/trace v1.38.0/go.mod h1:j1P9ivuFsTceSWe1oY+EeW3sc+Pp42sO++GHkg4wwhs=
|
||||
go.opentelemetry.io/proto/otlp v1.7.1 h1:gTOMpGDb0WTBOP8JaO72iL3auEZhVmAQg4ipjOVAtj4=
|
||||
go.opentelemetry.io/proto/otlp v1.7.1/go.mod h1:b2rVh6rfI/s2pHWNlB7ILJcRALpcNDzKhACevjI+ZnE=
|
||||
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
|
||||
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
|
||||
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
|
||||
go.uber.org/multierr v1.10.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
|
||||
go.uber.org/zap v1.26.0 h1:sI7k6L95XOKS281NhVKOFCUNIvv9e0w4BF8N3u+tCRo=
|
||||
go.uber.org/zap v1.26.0/go.mod h1:dtElttAiwGvoJ/vj4IwHBS/gXsEu/pZ50mUIRWuG0so=
|
||||
golang.org/x/arch v0.0.0-20210923205945-b76863e36670/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
|
||||
golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
|
||||
golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
|
||||
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
|
||||
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
|
||||
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
|
||||
golang.org/x/crypto v0.41.0 h1:WKYxWedPGCTVVl5+WHSSrOBT0O8lx32+zxmHxijgXp4=
|
||||
golang.org/x/crypto v0.41.0/go.mod h1:pO5AFd7FA68rFak7rOAGVuygIISepHftHnr8dr6+sUc=
|
||||
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
|
||||
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
|
||||
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
|
||||
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
|
||||
golang.org/x/net v0.43.0 h1:lat02VYK2j4aLzMzecihNvTlJNQUq316m2Mr9rnM6YE=
|
||||
golang.org/x/net v0.43.0/go.mod h1:vhO1fvI4dGsIjh73sWfUVjj3N7CA9WkKJNQm2svM6Jg=
|
||||
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
|
||||
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
|
||||
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
|
||||
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
|
||||
golang.org/x/sys v0.35.0 h1:vz1N37gP5bs89s7He8XuIYXpyY0+QlsKmzipCbUtyxI=
|
||||
golang.org/x/sys v0.35.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
|
||||
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
|
||||
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
|
||||
golang.org/x/text v0.28.0 h1:rhazDwis8INMIwQ4tpjLDzUhx6RlXqZNPEM0huQojng=
|
||||
golang.org/x/text v0.28.0/go.mod h1:U8nCwOR8jO/marOQ0QbDiOngZVEBB7MAiitBuMjXiNU=
|
||||
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
|
||||
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
|
||||
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
|
||||
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
|
||||
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
|
||||
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
|
||||
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5 h1:BIRfGDEjiHRrk0QKZe3Xv2ieMhtgRGeLcZQ0mIVn4EY=
|
||||
google.golang.org/genproto/googleapis/api v0.0.0-20250825161204-c5933d9347a5/go.mod h1:j3QtIyytwqGr1JUDtYXwtMXWPKsEa5LtzIFN1Wn5WvE=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5 h1:eaY8u2EuxbRv7c3NiGK0/NedzVsCcV6hDuU5qPX5EGE=
|
||||
google.golang.org/genproto/googleapis/rpc v0.0.0-20250825161204-c5933d9347a5/go.mod h1:M4/wBTSeyLxupu3W3tJtOgB14jILAS/XWPSSa3TAlJc=
|
||||
google.golang.org/grpc v1.75.0 h1:+TW+dqTd2Biwe6KKfhE5JpiYIBWq865PhKGSXiivqt4=
|
||||
google.golang.org/grpc v1.75.0/go.mod h1:JtPAzKiq4v1xcAB2hydNlWI2RnF85XXcV0mhKXr2ecQ=
|
||||
google.golang.org/protobuf v1.36.8 h1:xHScyCOEuuwZEc6UtSOvPbAT4zRh0xcNRYekJwfqyMc=
|
||||
google.golang.org/protobuf v1.36.8/go.mod h1:fuxRtAxBytpl4zzqUh6/eyUujkJdNiuEkXntxiD/uRU=
|
||||
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
|
||||
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
|
||||
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
|
||||
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
|
||||
gotest.tools/v3 v3.5.2 h1:7koQfIKdy+I8UTetycgUqXWSDwpgv193Ka+qRsmBY8Q=
|
||||
gotest.tools/v3 v3.5.2/go.mod h1:LtdLGcnqToBH83WByAAi/wiwSFCArdFIUV/xxN4pcjA=
|
||||
rsc.io/pdf v0.1.1/go.mod h1:n8OzWcQ6Sp37PL01nO98y4iUCRdTGarVfzxY20ICaU4=
|
||||
@ -1,192 +0,0 @@
package config

import (
	"fmt"
	"os"
	"strconv"
	"strings"
	"time"
)

type Config struct {
	env map[string]string
}

type ConfigProvider interface {
	GetString(key string) string
	GetInt(key string) int
	GetBool(key string) bool
	GetDuration(key string) time.Duration
	GetServerAddress() string
	GetDatabaseDSN() string
	GetDatabaseDSNForLogging() string
	IsProduction() bool
	Validate() error
}

func NewConfig() ConfigProvider {
	env := make(map[string]string)

	// Load environment variables
	for _, pair := range os.Environ() {
		parts := strings.SplitN(pair, "=", 2)
		if len(parts) == 2 {
			env[parts[0]] = parts[1]
		}
	}

	// Set defaults
	setDefault(env, "FAAS_SERVER_HOST", "0.0.0.0")
	setDefault(env, "FAAS_SERVER_PORT", "8082")
	setDefault(env, "FAAS_DB_HOST", "localhost")
	setDefault(env, "FAAS_DB_PORT", "5432")
	setDefault(env, "FAAS_DB_NAME", "faas")
	setDefault(env, "FAAS_DB_USER", "postgres")
	setDefault(env, "FAAS_DB_PASSWORD", "postgres")
	setDefault(env, "FAAS_DB_SSLMODE", "disable")
	setDefault(env, "FAAS_APP_ENV", "development")
	setDefault(env, "FAAS_LOG_LEVEL", "debug")
	setDefault(env, "FAAS_DEFAULT_RUNTIME", "docker")
	setDefault(env, "FAAS_FUNCTION_TIMEOUT", "300s")
	setDefault(env, "FAAS_MAX_MEMORY", "3008")
	setDefault(env, "FAAS_MAX_CONCURRENT", "100")
	setDefault(env, "FAAS_SANDBOX_ENABLED", "true")
	setDefault(env, "FAAS_NETWORK_ISOLATION", "true")
	setDefault(env, "FAAS_RESOURCE_LIMITS", "true")
	setDefault(env, "SERVER_READ_TIMEOUT", "30s")
	setDefault(env, "SERVER_WRITE_TIMEOUT", "30s")
	setDefault(env, "SERVER_IDLE_TIMEOUT", "120s")
	setDefault(env, "RATE_LIMIT_ENABLED", "true")
	setDefault(env, "RATE_LIMIT_RPS", "100")
	setDefault(env, "RATE_LIMIT_BURST", "200")
	setDefault(env, "METRICS_ENABLED", "true")
	setDefault(env, "METRICS_PORT", "9091")
	setDefault(env, "AUTH_PROVIDER", "header")
	setDefault(env, "AUTH_HEADER_USER_EMAIL", "X-User-Email")

	return &Config{env: env}
}

func setDefault(env map[string]string, key, value string) {
	if _, exists := env[key]; !exists {
		env[key] = value
	}
}

func (c *Config) GetString(key string) string {
	return c.env[key]
}

func (c *Config) GetInt(key string) int {
	val := c.env[key]
	if val == "" {
		return 0
	}

	intVal, err := strconv.Atoi(val)
	if err != nil {
		return 0
	}

	return intVal
}

func (c *Config) GetBool(key string) bool {
	val := strings.ToLower(c.env[key])
	return val == "true" || val == "1" || val == "yes" || val == "on"
}

func (c *Config) GetDuration(key string) time.Duration {
	val := c.env[key]
	if val == "" {
		return 0
	}

	duration, err := time.ParseDuration(val)
	if err != nil {
		return 0
	}

	return duration
}

func (c *Config) GetServerAddress() string {
	host := c.GetString("FAAS_SERVER_HOST")
	port := c.GetString("FAAS_SERVER_PORT")
	return fmt.Sprintf("%s:%s", host, port)
}

func (c *Config) GetDatabaseDSN() string {
	host := c.GetString("FAAS_DB_HOST")
	port := c.GetString("FAAS_DB_PORT")
	name := c.GetString("FAAS_DB_NAME")
	user := c.GetString("FAAS_DB_USER")
	password := c.GetString("FAAS_DB_PASSWORD")
	sslmode := c.GetString("FAAS_DB_SSLMODE")

	return fmt.Sprintf("host=%s port=%s user=%s password=%s dbname=%s sslmode=%s",
		host, port, user, password, name, sslmode)
}

func (c *Config) GetDatabaseDSNForLogging() string {
	host := c.GetString("FAAS_DB_HOST")
	port := c.GetString("FAAS_DB_PORT")
	name := c.GetString("FAAS_DB_NAME")
	user := c.GetString("FAAS_DB_USER")
	sslmode := c.GetString("FAAS_DB_SSLMODE")

	return fmt.Sprintf("host=%s port=%s user=%s password=*** dbname=%s sslmode=%s",
		host, port, user, name, sslmode)
}

func (c *Config) IsProduction() bool {
	env := strings.ToLower(c.GetString("FAAS_APP_ENV"))
	return env == "production" || env == "prod"
}

func (c *Config) GetMetricsAddress() string {
	host := c.GetString("FAAS_SERVER_HOST")
	port := c.GetString("METRICS_PORT")
	return fmt.Sprintf("%s:%s", host, port)
}

func (c *Config) Validate() error {
	required := []string{
		"FAAS_SERVER_HOST",
		"FAAS_SERVER_PORT",
		"FAAS_DB_HOST",
		"FAAS_DB_PORT",
		"FAAS_DB_NAME",
		"FAAS_DB_USER",
		"FAAS_DB_PASSWORD",
	}

	for _, key := range required {
		if c.GetString(key) == "" {
			return fmt.Errorf("required environment variable %s is not set", key)
		}
	}

	// Validate server port
	if c.GetInt("FAAS_SERVER_PORT") <= 0 || c.GetInt("FAAS_SERVER_PORT") > 65535 {
		return fmt.Errorf("invalid server port: %s", c.GetString("FAAS_SERVER_PORT"))
	}

	// Validate database port
	if c.GetInt("FAAS_DB_PORT") <= 0 || c.GetInt("FAAS_DB_PORT") > 65535 {
		return fmt.Errorf("invalid database port: %s", c.GetString("FAAS_DB_PORT"))
	}

	// Validate timeout
	if c.GetDuration("FAAS_FUNCTION_TIMEOUT") <= 0 {
		return fmt.Errorf("invalid function timeout: %s", c.GetString("FAAS_FUNCTION_TIMEOUT"))
	}

	// Validate memory limit
	maxMemory := c.GetInt("FAAS_MAX_MEMORY")
	if maxMemory <= 0 || maxMemory > 10240 { // Max 10GB
		return fmt.Errorf("invalid max memory: %s", c.GetString("FAAS_MAX_MEMORY"))
	}

	return nil
}
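
Any of the defaults above can be overridden through the environment before the server starts; a hypothetical local override, with illustrative values only:

```bash
export FAAS_SERVER_PORT=8083
export FAAS_DB_HOST=localhost
export FAAS_LOG_LEVEL=info
go run ./cmd/server
```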
@ -1,60 +0,0 @@
package database

import (
	"database/sql"
	"fmt"
	"time"

	_ "github.com/lib/pq"
	"go.uber.org/zap"
)

type PostgresProvider struct {
	db     *sql.DB
	logger *zap.Logger
}

func NewPostgresProvider(dsn string, maxOpenConns, maxIdleConns int, connMaxLifetime string, logger *zap.Logger) (*sql.DB, error) {
	db, err := sql.Open("postgres", dsn)
	if err != nil {
		return nil, fmt.Errorf("failed to open database connection: %w", err)
	}

	// Configure connection pool
	db.SetMaxOpenConns(maxOpenConns)
	db.SetMaxIdleConns(maxIdleConns)

	if connMaxLifetime != "" {
		lifetime, err := time.ParseDuration(connMaxLifetime)
		if err != nil {
			return nil, fmt.Errorf("invalid connection max lifetime: %w", err)
		}
		db.SetConnMaxLifetime(lifetime)
	}

	// Test the connection
	if err := db.Ping(); err != nil {
		db.Close()
		return nil, fmt.Errorf("failed to ping database: %w", err)
	}

	return db, nil
}

func (p *PostgresProvider) Close() error {
	if p.db != nil {
		return p.db.Close()
	}
	return nil
}

func (p *PostgresProvider) Ping() error {
	if p.db != nil {
		return p.db.Ping()
	}
	return fmt.Errorf("database connection is nil")
}

func (p *PostgresProvider) GetDB() *sql.DB {
	return p.db
}
@ -1,170 +0,0 @@
|
||||
package domain
|
||||
|
||||
import (
|
||||
"database/sql/driver"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strconv"
|
||||
"strings"
|
||||
"time"
|
||||
)
|
||||
|
||||
type Duration struct {
|
||||
time.Duration
|
||||
}
|
||||
|
||||
func (d Duration) MarshalJSON() ([]byte, error) {
|
||||
return json.Marshal(d.Duration.String())
|
||||
}
|
||||
|
||||
func (d *Duration) UnmarshalJSON(b []byte) error {
|
||||
var s string
|
||||
if err := json.Unmarshal(b, &s); err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
duration, err := time.ParseDuration(s)
|
||||
if err != nil {
|
||||
return err
|
||||
}
|
||||
|
||||
d.Duration = duration
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d Duration) Value() (driver.Value, error) {
|
||||
// Store as a PostgreSQL-compatible interval string
|
||||
return d.Duration.String(), nil
|
||||
}
|
||||
|
||||
func (d *Duration) Scan(value interface{}) error {
|
||||
if value == nil {
|
||||
d.Duration = 0
|
||||
return nil
|
||||
}
|
||||
|
||||
switch v := value.(type) {
|
||||
case int64:
|
||||
// Handle legacy nanosecond values that were incorrectly stored
|
||||
// If the value is extremely large (likely nanoseconds), convert it
|
||||
if v > 1000000000000 { // More than 16 minutes in nanoseconds, likely a nanosecond value
|
||||
d.Duration = time.Duration(v)
|
||||
} else {
|
||||
// Assume it's seconds for smaller values
|
||||
d.Duration = time.Duration(v) * time.Second
|
||||
}
|
||||
case string:
|
||||
duration, err := time.ParseDuration(v)
|
||||
if err != nil {
|
||||
return fmt.Errorf("cannot parse duration string: %s", v)
|
||||
}
|
||||
d.Duration = duration
|
||||
case []uint8:
|
        // Handle PostgreSQL interval format
        intervalStr := string(v)

        // Try parsing as Go duration first (for newer records)
        if duration, err := time.ParseDuration(intervalStr); err == nil {
            d.Duration = duration
            return nil
        }

        // Handle PostgreSQL interval formats like "00:00:30" or "8333333:20:00"
        if strings.Contains(intervalStr, ":") {
            parts := strings.Split(intervalStr, ":")
            if len(parts) >= 2 {
                var hours, minutes, seconds float64
                var err error

                switch len(parts) {
                case 2: // MM:SS
                    minutes, err = strconv.ParseFloat(parts[0], 64)
                    if err != nil {
                        return fmt.Errorf("cannot parse minutes from interval: %s", intervalStr)
                    }
                    seconds, err = strconv.ParseFloat(parts[1], 64)
                    if err != nil {
                        return fmt.Errorf("cannot parse seconds from interval: %s", intervalStr)
                    }
                case 3: // HH:MM:SS
                    hours, err = strconv.ParseFloat(parts[0], 64)
                    if err != nil {
                        return fmt.Errorf("cannot parse hours from interval: %s", intervalStr)
                    }
                    minutes, err = strconv.ParseFloat(parts[1], 64)
                    if err != nil {
                        return fmt.Errorf("cannot parse minutes from interval: %s", intervalStr)
                    }
                    seconds, err = strconv.ParseFloat(parts[2], 64)
                    if err != nil {
                        return fmt.Errorf("cannot parse seconds from interval: %s", intervalStr)
                    }
                default:
                    return fmt.Errorf("unsupported interval format: %s", intervalStr)
                }

                // Convert to duration
                totalSeconds := hours*3600 + minutes*60 + seconds
                d.Duration = time.Duration(totalSeconds * float64(time.Second))
                return nil
            }
        }

        // Handle PostgreSQL interval format like "30 seconds", "1 minute", etc.
        if strings.Contains(intervalStr, " ") {
            // Try to parse common PostgreSQL interval formats
            intervalStr = strings.TrimSpace(intervalStr)

            // Replace PostgreSQL interval keywords with Go duration format
            intervalStr = strings.ReplaceAll(intervalStr, " seconds", "s")
            intervalStr = strings.ReplaceAll(intervalStr, " second", "s")
            intervalStr = strings.ReplaceAll(intervalStr, " minutes", "m")
            intervalStr = strings.ReplaceAll(intervalStr, " minute", "m")
            intervalStr = strings.ReplaceAll(intervalStr, " hours", "h")
            intervalStr = strings.ReplaceAll(intervalStr, " hour", "h")

            if duration, err := time.ParseDuration(intervalStr); err == nil {
                d.Duration = duration
                return nil
            }
        }

        return fmt.Errorf("cannot parse PostgreSQL interval format: %s", intervalStr)
    default:
        return fmt.Errorf("cannot scan %T into Duration", value)
    }

    return nil
}

func ParseDuration(s string) (Duration, error) {
    if s == "" {
        return Duration{}, fmt.Errorf("empty duration string")
    }

    s = strings.TrimSpace(s)

    duration, err := time.ParseDuration(s)
    if err != nil {
        return Duration{}, fmt.Errorf("failed to parse duration '%s': %v", s, err)
    }

    return Duration{Duration: duration}, nil
}

func (d Duration) String() string {
    return d.Duration.String()
}

func (d Duration) Seconds() float64 {
    return d.Duration.Seconds()
}

func (d Duration) Minutes() float64 {
    return d.Duration.Minutes()
}

func (d Duration) Hours() float64 {
    return d.Duration.Hours()
}
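For orientation only (this note and sketch are editorial, not part of the diff): the Scan logic above accepts both Go-style duration strings and PostgreSQL `HH:MM:SS` intervals. A minimal test sketch, assuming it sits in the same `domain` package as the Duration type and that the `[]byte` branch shown above is the one exercised; the test name is invented.

```go
package domain // hypothetical test file next to the Duration type above

import "testing"

func TestDurationScanFormats(t *testing.T) {
	var d Duration

	// Go-style duration strings parse directly via time.ParseDuration.
	if err := d.Scan([]byte("1m30s")); err != nil || d.Seconds() != 90 {
		t.Fatalf("expected 90s, got %v (err: %v)", d, err)
	}

	// PostgreSQL HH:MM:SS intervals are converted field by field.
	if err := d.Scan([]byte("00:01:30")); err != nil || d.Seconds() != 90 {
		t.Fatalf("expected 90s, got %v (err: %v)", d, err)
	}
}
```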
@ -1,164 +0,0 @@
package domain

import (
    "encoding/json"
    "time"

    "github.com/google/uuid"
)

// RuntimeType represents supported function runtimes
type RuntimeType string

const (
    RuntimeNodeJS18 RuntimeType = "nodejs18"
    RuntimePython39 RuntimeType = "python3.9"
    RuntimeGo120    RuntimeType = "go1.20"
    RuntimeCustom   RuntimeType = "custom"
)

// ExecutionStatus represents the status of function execution
type ExecutionStatus string

const (
    StatusPending   ExecutionStatus = "pending"
    StatusRunning   ExecutionStatus = "running"
    StatusCompleted ExecutionStatus = "completed"
    StatusFailed    ExecutionStatus = "failed"
    StatusTimeout   ExecutionStatus = "timeout"
    StatusCanceled  ExecutionStatus = "canceled"
)

// OwnerType represents the type of owner
type OwnerType string

const (
    OwnerTypeIndividual OwnerType = "individual"
    OwnerTypeTeam       OwnerType = "team"
)

// Owner represents ownership information
type Owner struct {
    Type  OwnerType `json:"type" validate:"required,oneof=individual team"`
    Name  string    `json:"name" validate:"required,min=1,max=255"`
    Owner string    `json:"owner" validate:"required,min=1,max=255"`
}

// FunctionDefinition represents a serverless function
type FunctionDefinition struct {
    ID          uuid.UUID         `json:"id" db:"id"`
    Name        string            `json:"name" validate:"required,min=1,max=255" db:"name"`
    AppID       string            `json:"app_id" validate:"required" db:"app_id"`
    Runtime     RuntimeType       `json:"runtime" validate:"required" db:"runtime"`
    Image       string            `json:"image" validate:"required" db:"image"`
    Handler     string            `json:"handler" validate:"required" db:"handler"`
    Code        string            `json:"code,omitempty" db:"code"`
    Environment map[string]string `json:"environment,omitempty" db:"environment"`
    Timeout     Duration          `json:"timeout" validate:"required" db:"timeout"`
    Memory      int               `json:"memory" validate:"required,min=64,max=3008" db:"memory"`
    Owner       Owner             `json:"owner" validate:"required"`
    CreatedAt   time.Time         `json:"created_at" db:"created_at"`
    UpdatedAt   time.Time         `json:"updated_at" db:"updated_at"`
}

// FunctionExecution represents a function execution
type FunctionExecution struct {
    ID          uuid.UUID       `json:"id" db:"id"`
    FunctionID  uuid.UUID       `json:"function_id" db:"function_id"`
    Status      ExecutionStatus `json:"status" db:"status"`
    Input       json.RawMessage `json:"input,omitempty" db:"input"`
    Output      json.RawMessage `json:"output,omitempty" db:"output"`
    Error       string          `json:"error,omitempty" db:"error"`
    Duration    time.Duration   `json:"duration" db:"duration"`
    MemoryUsed  int             `json:"memory_used" db:"memory_used"`
    Logs        []string        `json:"logs,omitempty" db:"logs"`
    ContainerID string          `json:"container_id,omitempty" db:"container_id"`
    ExecutorID  string          `json:"executor_id" db:"executor_id"`
    CreatedAt   time.Time       `json:"created_at" db:"created_at"`
    StartedAt   *time.Time      `json:"started_at,omitempty" db:"started_at"`
    CompletedAt *time.Time      `json:"completed_at,omitempty" db:"completed_at"`
}

// CreateFunctionRequest represents a request to create a new function
type CreateFunctionRequest struct {
    Name        string            `json:"name" validate:"required,min=1,max=255"`
    AppID       string            `json:"app_id" validate:"required"`
    Runtime     RuntimeType       `json:"runtime" validate:"required"`
    Image       string            `json:"image" validate:"required"`
    Handler     string            `json:"handler" validate:"required"`
    Code        string            `json:"code,omitempty"`
    Environment map[string]string `json:"environment,omitempty"`
    Timeout     Duration          `json:"timeout" validate:"required"`
    Memory      int               `json:"memory" validate:"required,min=64,max=3008"`
    Owner       Owner             `json:"owner" validate:"required"`
}

// UpdateFunctionRequest represents a request to update an existing function
type UpdateFunctionRequest struct {
    Name        *string           `json:"name,omitempty" validate:"omitempty,min=1,max=255"`
    Runtime     *RuntimeType      `json:"runtime,omitempty"`
    Image       *string           `json:"image,omitempty"`
    Handler     *string           `json:"handler,omitempty"`
    Code        *string           `json:"code,omitempty"`
    Environment map[string]string `json:"environment,omitempty"`
    Timeout     *Duration         `json:"timeout,omitempty"`
    Memory      *int              `json:"memory,omitempty" validate:"omitempty,min=64,max=3008"`
    Owner       *Owner            `json:"owner,omitempty"`
}

// ExecuteFunctionRequest represents a request to execute a function
type ExecuteFunctionRequest struct {
    FunctionID uuid.UUID       `json:"function_id" validate:"required"`
    Input      json.RawMessage `json:"input,omitempty"`
    Async      bool            `json:"async,omitempty"`
}

// ExecuteFunctionResponse represents a response for function execution
type ExecuteFunctionResponse struct {
    ExecutionID uuid.UUID       `json:"execution_id"`
    Status      ExecutionStatus `json:"status"`
    Output      json.RawMessage `json:"output,omitempty"`
    Error       string          `json:"error,omitempty"`
    Duration    time.Duration   `json:"duration,omitempty"`
    MemoryUsed  int             `json:"memory_used,omitempty"`
}

// DeployFunctionRequest represents a request to deploy a function
type DeployFunctionRequest struct {
    FunctionID uuid.UUID `json:"function_id" validate:"required"`
    Force      bool      `json:"force,omitempty"`
}

// DeployFunctionResponse represents a response for function deployment
type DeployFunctionResponse struct {
    Status  string `json:"status"`
    Message string `json:"message,omitempty"`
    Image   string `json:"image,omitempty"`
    ImageID string `json:"image_id,omitempty"`
}

// RuntimeInfo represents runtime information
type RuntimeInfo struct {
    Type         RuntimeType `json:"type"`
    Version      string      `json:"version"`
    Available    bool        `json:"available"`
    DefaultImage string      `json:"default_image"`
    Description  string      `json:"description"`
}

// ExecutionResult contains function execution results
type ExecutionResult struct {
    Output     json.RawMessage `json:"output,omitempty"`
    Error      string          `json:"error,omitempty"`
    Duration   time.Duration   `json:"duration"`
    MemoryUsed int             `json:"memory_used"`
    Logs       []string        `json:"logs,omitempty"`
}

// AuthContext represents the authentication context for a request
type AuthContext struct {
    UserID      string            `json:"user_id"`
    AppID       string            `json:"app_id"`
    Permissions []string          `json:"permissions"`
    Claims      map[string]string `json:"claims"`
}
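As an editorial illustration of the request types above (not part of the diff): a minimal sketch of building a CreateFunctionRequest from code inside the faas module, assuming ParseDuration from the Duration file above; the literal values and the `main` wrapper are invented for the example.

```go
package main // illustrative caller inside the faas module; values are made up

import (
	"fmt"
	"log"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
)

func main() {
	// ParseDuration wraps time.ParseDuration, so Go-style strings are expected here.
	timeout, err := domain.ParseDuration("30s")
	if err != nil {
		log.Fatal(err)
	}

	req := domain.CreateFunctionRequest{
		Name:    "hello-world",
		AppID:   "demo-app",
		Runtime: domain.RuntimeNodeJS18,
		Image:   "node:18-alpine", // could be left empty; the handler falls back to the runtime default
		Handler: "index.handler",
		Timeout: timeout,
		Memory:  128,
		Owner: domain.Owner{
			Type:  domain.OwnerTypeIndividual,
			Name:  "Jane Doe",
			Owner: "jane@example.com",
		},
	}
	fmt.Printf("%+v\n", req)
}
```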
@ -1,70 +0,0 @@
package domain

// RuntimeConfig defines the configuration for each runtime
type RuntimeConfig struct {
    Name        string            `json:"name"`
    DisplayName string            `json:"display_name"`
    Image       string            `json:"image"`
    Handler     string            `json:"default_handler"`
    Extensions  []string          `json:"file_extensions"`
    Environment map[string]string `json:"default_environment"`
}

// GetRuntimeConfigs returns the available runtime configurations
func GetRuntimeConfigs() map[RuntimeType]RuntimeConfig {
    return map[RuntimeType]RuntimeConfig{
        "nodejs18": {
            Name:        "nodejs18",
            DisplayName: "Node.js 18.x",
            Image:       "node:18-alpine",
            Handler:     "index.handler",
            Extensions:  []string{".js", ".mjs", ".ts"},
            Environment: map[string]string{
                "NODE_ENV": "production",
            },
        },
        "python3.9": {
            Name:        "python3.9",
            DisplayName: "Python 3.9",
            Image:       "python:3.9-alpine",
            Handler:     "main.handler",
            Extensions:  []string{".py"},
            Environment: map[string]string{
                "PYTHONPATH": "/app",
            },
        },
        "go1.20": {
            Name:        "go1.20",
            DisplayName: "Go 1.20",
            Image:       "golang:1.20-alpine",
            Handler:     "main.Handler",
            Extensions:  []string{".go"},
            Environment: map[string]string{
                "CGO_ENABLED": "0",
                "GOOS":        "linux",
            },
        },
    }
}

// GetRuntimeConfig returns the configuration for a specific runtime
func GetRuntimeConfig(runtime RuntimeType) (RuntimeConfig, bool) {
    configs := GetRuntimeConfigs()
    config, exists := configs[runtime]
    return config, exists
}

// GetAvailableRuntimes returns a list of available runtimes for the frontend
func GetAvailableRuntimes() []map[string]string {
    configs := GetRuntimeConfigs()
    var runtimes []map[string]string

    for _, config := range configs {
        runtimes = append(runtimes, map[string]string{
            "value": config.Name,
            "label": config.DisplayName,
        })
    }

    return runtimes
}
@ -1,277 +0,0 @@
package handlers

import (
    "net/http"
    "strconv"

    "github.com/gin-gonic/gin"
    "github.com/google/uuid"
    "go.uber.org/zap"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/RyanCopley/skybridge/faas/internal/services"
)

type ExecutionHandler struct {
    executionService services.ExecutionService
    authService      services.AuthService
    logger           *zap.Logger
}

func NewExecutionHandler(executionService services.ExecutionService, authService services.AuthService, logger *zap.Logger) *ExecutionHandler {
    return &ExecutionHandler{
        executionService: executionService,
        authService:      authService,
        logger:           logger,
    }
}

func (h *ExecutionHandler) Execute(c *gin.Context) {
    idStr := c.Param("id")
    functionID, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    var req domain.ExecuteFunctionRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        h.logger.Warn("Invalid execute function request", zap.Error(err))
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request format"})
        return
    }
    req.FunctionID = functionID

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.execute") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    response, err := h.executionService.Execute(c.Request.Context(), &req, authCtx.UserID)
    if err != nil {
        h.logger.Error("Failed to execute function", zap.String("function_id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function execution initiated",
        zap.String("function_id", functionID.String()),
        zap.String("execution_id", response.ExecutionID.String()),
        zap.String("user_id", authCtx.UserID),
        zap.Bool("async", req.Async))

    c.JSON(http.StatusOK, response)
}

func (h *ExecutionHandler) Invoke(c *gin.Context) {
    idStr := c.Param("id")
    functionID, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    var req domain.ExecuteFunctionRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        // Allow empty body
        req = domain.ExecuteFunctionRequest{
            FunctionID: functionID,
            Async:      true,
        }
    }
    req.FunctionID = functionID
    req.Async = true // Force async for invoke endpoint

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.execute") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    response, err := h.executionService.Execute(c.Request.Context(), &req, authCtx.UserID)
    if err != nil {
        h.logger.Error("Failed to invoke function", zap.String("function_id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function invoked successfully",
        zap.String("function_id", functionID.String()),
        zap.String("execution_id", response.ExecutionID.String()),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusAccepted, response)
}

func (h *ExecutionHandler) GetByID(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid execution ID"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    execution, err := h.executionService.GetByID(c.Request.Context(), id)
    if err != nil {
        h.logger.Error("Failed to get execution", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusNotFound, gin.H{"error": "Execution not found"})
        return
    }

    c.JSON(http.StatusOK, execution)
}

func (h *ExecutionHandler) List(c *gin.Context) {
    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    var functionID *uuid.UUID
    functionIDStr := c.Query("function_id")
    if functionIDStr != "" {
        id, err := uuid.Parse(functionIDStr)
        if err != nil {
            c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
            return
        }
        functionID = &id
    }

    limitStr := c.DefaultQuery("limit", "50")
    offsetStr := c.DefaultQuery("offset", "0")

    limit, err := strconv.Atoi(limitStr)
    if err != nil || limit <= 0 {
        limit = 50
    }

    offset, err := strconv.Atoi(offsetStr)
    if err != nil || offset < 0 {
        offset = 0
    }

    executions, err := h.executionService.List(c.Request.Context(), functionID, limit, offset)
    if err != nil {
        h.logger.Error("Failed to list executions", zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    c.JSON(http.StatusOK, gin.H{
        "executions": executions,
        "limit":      limit,
        "offset":     offset,
    })
}

func (h *ExecutionHandler) Cancel(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid execution ID"})
        return
    }

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.execute") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    if err := h.executionService.Cancel(c.Request.Context(), id, authCtx.UserID); err != nil {
        h.logger.Error("Failed to cancel execution", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Execution canceled successfully",
        zap.String("execution_id", id.String()),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusOK, gin.H{"message": "Execution canceled successfully"})
}

func (h *ExecutionHandler) GetLogs(c *gin.Context) {
    idStr := c.Param("id")
    h.logger.Debug("GetLogs endpoint called",
        zap.String("execution_id", idStr),
        zap.String("client_ip", c.ClientIP()))

    id, err := uuid.Parse(idStr)
    if err != nil {
        h.logger.Warn("Invalid execution ID provided to GetLogs",
            zap.String("id", idStr),
            zap.Error(err))
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid execution ID"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        h.logger.Warn("Insufficient permissions for GetLogs",
            zap.String("execution_id", idStr))
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    h.logger.Debug("Calling execution service GetLogs",
        zap.String("execution_id", idStr))

    logs, err := h.executionService.GetLogs(c.Request.Context(), id)
    if err != nil {
        h.logger.Error("Failed to get execution logs", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Debug("Successfully retrieved logs from execution service",
        zap.String("execution_id", idStr),
        zap.Int("log_count", len(logs)))

    c.JSON(http.StatusOK, gin.H{
        "logs": logs,
    })
}

func (h *ExecutionHandler) GetRunning(c *gin.Context) {
    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    executions, err := h.executionService.GetRunningExecutions(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get running executions", zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    c.JSON(http.StatusOK, gin.H{
        "executions": executions,
        "count":      len(executions),
    })
}
@ -1,244 +0,0 @@
package handlers

import (
    "net/http"
    "strconv"

    "github.com/gin-gonic/gin"
    "github.com/google/uuid"
    "go.uber.org/zap"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/RyanCopley/skybridge/faas/internal/services"
)

type FunctionHandler struct {
    functionService services.FunctionService
    authService     services.AuthService
    logger          *zap.Logger
}

func NewFunctionHandler(functionService services.FunctionService, authService services.AuthService, logger *zap.Logger) *FunctionHandler {
    return &FunctionHandler{
        functionService: functionService,
        authService:     authService,
        logger:          logger,
    }
}

func (h *FunctionHandler) Create(c *gin.Context) {
    var req domain.CreateFunctionRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        h.logger.Warn("Invalid create function request", zap.Error(err))
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request format"})
        return
    }

    // Auto-select image based on runtime if not provided or empty
    if req.Image == "" {
        if runtimeConfig, exists := domain.GetRuntimeConfig(req.Runtime); exists && runtimeConfig.Image != "" {
            req.Image = runtimeConfig.Image
        }
    }

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.write") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    function, err := h.functionService.Create(c.Request.Context(), &req, authCtx.UserID)
    if err != nil {
        h.logger.Error("Failed to create function", zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function created successfully",
        zap.String("function_id", function.ID.String()),
        zap.String("name", function.Name),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusCreated, function)
}

func (h *FunctionHandler) GetByID(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    function, err := h.functionService.GetByID(c.Request.Context(), id)
    if err != nil {
        h.logger.Error("Failed to get function", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusNotFound, gin.H{"error": "Function not found"})
        return
    }

    c.JSON(http.StatusOK, function)
}

func (h *FunctionHandler) Update(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    var req domain.UpdateFunctionRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        h.logger.Warn("Invalid update function request", zap.Error(err))
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid request format"})
        return
    }

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.write") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    function, err := h.functionService.Update(c.Request.Context(), id, &req, authCtx.UserID)
    if err != nil {
        h.logger.Error("Failed to update function", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function updated successfully",
        zap.String("function_id", id.String()),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusOK, function)
}

func (h *FunctionHandler) Delete(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.delete") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    if err := h.functionService.Delete(c.Request.Context(), id, authCtx.UserID); err != nil {
        h.logger.Error("Failed to delete function", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function deleted successfully",
        zap.String("function_id", id.String()),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusOK, gin.H{"message": "Function deleted successfully"})
}

func (h *FunctionHandler) List(c *gin.Context) {
    if !h.authService.HasPermission(c.Request.Context(), "faas.read") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    appID := c.Query("app_id")
    limitStr := c.DefaultQuery("limit", "50")
    offsetStr := c.DefaultQuery("offset", "0")

    limit, err := strconv.Atoi(limitStr)
    if err != nil || limit <= 0 {
        limit = 50
    }

    offset, err := strconv.Atoi(offsetStr)
    if err != nil || offset < 0 {
        offset = 0
    }

    functions, err := h.functionService.List(c.Request.Context(), appID, limit, offset)
    if err != nil {
        h.logger.Error("Failed to list functions", zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    c.JSON(http.StatusOK, gin.H{
        "functions": functions,
        "limit":     limit,
        "offset":    offset,
    })
}

func (h *FunctionHandler) Deploy(c *gin.Context) {
    idStr := c.Param("id")
    id, err := uuid.Parse(idStr)
    if err != nil {
        c.JSON(http.StatusBadRequest, gin.H{"error": "Invalid function ID"})
        return
    }

    var req domain.DeployFunctionRequest
    if err := c.ShouldBindJSON(&req); err != nil {
        // Allow empty body for deploy
        req = domain.DeployFunctionRequest{
            FunctionID: id,
            Force:      false,
        }
    }
    req.FunctionID = id

    authCtx, err := h.authService.GetAuthContext(c.Request.Context())
    if err != nil {
        h.logger.Error("Failed to get auth context", zap.Error(err))
        c.JSON(http.StatusUnauthorized, gin.H{"error": "Authentication required"})
        return
    }

    if !h.authService.HasPermission(c.Request.Context(), "faas.deploy") {
        c.JSON(http.StatusForbidden, gin.H{"error": "Insufficient permissions"})
        return
    }

    response, err := h.functionService.Deploy(c.Request.Context(), id, &req, authCtx.UserID)
    if err != nil {
        h.logger.Error("Failed to deploy function", zap.String("id", idStr), zap.Error(err))
        c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
        return
    }

    h.logger.Info("Function deployed successfully",
        zap.String("function_id", id.String()),
        zap.String("user_id", authCtx.UserID))

    c.JSON(http.StatusOK, response)
}
@ -1,70 +0,0 @@
package handlers

import (
    "database/sql"
    "net/http"
    "time"

    "github.com/gin-gonic/gin"
    "go.uber.org/zap"
)

type HealthHandler struct {
    db     *sql.DB
    logger *zap.Logger
}

func NewHealthHandler(db *sql.DB, logger *zap.Logger) *HealthHandler {
    return &HealthHandler{
        db:     db,
        logger: logger,
    }
}

func (h *HealthHandler) Health(c *gin.Context) {
    h.logger.Debug("Health check requested")

    response := gin.H{
        "status":    "healthy",
        "service":   "faas",
        "timestamp": time.Now().UTC().Format(time.RFC3339),
        "version":   "1.0.0",
    }

    c.JSON(http.StatusOK, response)
}

func (h *HealthHandler) Ready(c *gin.Context) {
    h.logger.Debug("Readiness check requested")

    checks := make(map[string]interface{})
    overall := "ready"

    // Check database connection
    if err := h.db.Ping(); err != nil {
        h.logger.Error("Database health check failed", zap.Error(err))
        checks["database"] = gin.H{
            "status": "unhealthy",
            "error":  err.Error(),
        }
        overall = "not ready"
    } else {
        checks["database"] = gin.H{
            "status": "healthy",
        }
    }

    response := gin.H{
        "status":    overall,
        "service":   "faas",
        "timestamp": time.Now().UTC().Format(time.RFC3339),
        "checks":    checks,
    }

    statusCode := http.StatusOK
    if overall != "ready" {
        statusCode = http.StatusServiceUnavailable
    }

    c.JSON(statusCode, response)
}
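For context (editorial, not part of the diff): the deleted server wiring is not shown in this excerpt, so the following sketch only assumes Gin and the handler methods above; every route path, group, and the `registerRoutes` name are guesses, not the project's actual router configuration.

```go
package handlers // hypothetical wiring sketch; paths are assumed, not taken from the repo

import "github.com/gin-gonic/gin"

// registerRoutes shows one plausible way the three handlers above could be mounted.
func registerRoutes(r *gin.Engine, fh *FunctionHandler, eh *ExecutionHandler, hh *HealthHandler) {
	r.GET("/healthz", hh.Health)
	r.GET("/readyz", hh.Ready)

	api := r.Group("/api/v1")
	{
		api.POST("/functions", fh.Create)
		api.GET("/functions", fh.List)
		api.GET("/functions/:id", fh.GetByID)
		api.PUT("/functions/:id", fh.Update)
		api.DELETE("/functions/:id", fh.Delete)
		api.POST("/functions/:id/deploy", fh.Deploy)
		api.POST("/functions/:id/execute", eh.Execute)
		api.POST("/functions/:id/invoke", eh.Invoke)

		api.GET("/executions", eh.List)
		api.GET("/executions/:id", eh.GetByID)
		api.GET("/executions/:id/logs", eh.GetLogs)
		api.POST("/executions/:id/cancel", eh.Cancel)
		// Kept off the /executions/:id subtree to avoid router wildcard conflicts.
		api.GET("/running-executions", eh.GetRunning)
	}
}
```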
@ -1,32 +0,0 @@
package repository

import (
    "context"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/google/uuid"
)

// FunctionRepository provides CRUD operations for functions
type FunctionRepository interface {
    Create(ctx context.Context, function *domain.FunctionDefinition) (*domain.FunctionDefinition, error)
    GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionDefinition, error)
    GetByName(ctx context.Context, appID, name string) (*domain.FunctionDefinition, error)
    Update(ctx context.Context, id uuid.UUID, updates *domain.UpdateFunctionRequest) (*domain.FunctionDefinition, error)
    Delete(ctx context.Context, id uuid.UUID) error
    List(ctx context.Context, appID string, limit, offset int) ([]*domain.FunctionDefinition, error)
    GetByAppID(ctx context.Context, appID string) ([]*domain.FunctionDefinition, error)
}

// ExecutionRepository provides CRUD operations for function executions
type ExecutionRepository interface {
    Create(ctx context.Context, execution *domain.FunctionExecution) (*domain.FunctionExecution, error)
    GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionExecution, error)
    Update(ctx context.Context, id uuid.UUID, execution *domain.FunctionExecution) (*domain.FunctionExecution, error)
    Delete(ctx context.Context, id uuid.UUID) error
    List(ctx context.Context, functionID *uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error)
    GetByFunctionID(ctx context.Context, functionID uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error)
    GetByStatus(ctx context.Context, status domain.ExecutionStatus, limit, offset int) ([]*domain.FunctionExecution, error)
    UpdateStatus(ctx context.Context, id uuid.UUID, status domain.ExecutionStatus) error
    GetRunningExecutions(ctx context.Context) ([]*domain.FunctionExecution, error)
}
@ -1,321 +0,0 @@
package postgres

import (
    "context"
    "database/sql"
    "encoding/json"
    "fmt"
    "time"

    "github.com/google/uuid"
    "github.com/lib/pq"
    "go.uber.org/zap"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/RyanCopley/skybridge/faas/internal/repository"
)

type executionRepository struct {
    db     *sql.DB
    logger *zap.Logger
}

// Helper function to convert time.Duration to PostgreSQL interval
func durationToInterval(d time.Duration) interface{} {
    if d == 0 {
        return nil
    }
    // Convert nanoseconds to PostgreSQL interval format
    seconds := float64(d) / float64(time.Second)
    return fmt.Sprintf("%.9f seconds", seconds)
}

// Helper function to convert PostgreSQL interval to time.Duration
func intervalToDuration(interval interface{}) (time.Duration, error) {
    if interval == nil {
        return 0, nil
    }

    switch v := interval.(type) {
    case string:
        if v == "" {
            return 0, nil
        }
        // Try to parse as PostgreSQL interval
        // For now, we'll use a simple approach - parse common formats
        duration, err := time.ParseDuration(v)
        if err == nil {
            return duration, nil
        }
        // Handle PostgreSQL interval format like "00:00:05.123456"
        var hours, minutes int
        var seconds float64
        if n, err := fmt.Sscanf(v, "%d:%d:%f", &hours, &minutes, &seconds); n == 3 && err == nil {
            return time.Duration(hours)*time.Hour + time.Duration(minutes)*time.Minute + time.Duration(seconds*float64(time.Second)), nil
        }
        return 0, fmt.Errorf("unable to parse interval: %s", v)
    case []byte:
        return intervalToDuration(string(v))
    default:
        return 0, fmt.Errorf("unexpected interval type: %T", interval)
    }
}

// Helper function to handle JSON fields
func jsonField(data json.RawMessage) interface{} {
    if len(data) == 0 {
        return "{}" // Return empty JSON string instead of nil or RawMessage
    }
    return string(data) // Convert RawMessage to string for database operations
}

func NewExecutionRepository(db *sql.DB, logger *zap.Logger) repository.ExecutionRepository {
    return &executionRepository{
        db:     db,
        logger: logger,
    }
}

func (r *executionRepository) Create(ctx context.Context, execution *domain.FunctionExecution) (*domain.FunctionExecution, error) {
    query := `
        INSERT INTO executions (id, function_id, status, input, executor_id, created_at)
        VALUES ($1, $2, $3, $4, $5, $6)
        RETURNING created_at`

    err := r.db.QueryRowContext(ctx, query,
        execution.ID, execution.FunctionID, execution.Status, jsonField(execution.Input),
        execution.ExecutorID, execution.CreatedAt,
    ).Scan(&execution.CreatedAt)

    if err != nil {
        r.logger.Error("Failed to create execution", zap.Error(err))
        return nil, fmt.Errorf("failed to create execution: %w", err)
    }

    return execution, nil
}

func (r *executionRepository) GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionExecution, error) {
    query := `
        SELECT id, function_id, status, input, output, error, duration, memory_used,
               logs, container_id, executor_id, created_at, started_at, completed_at
        FROM executions WHERE id = $1`

    execution := &domain.FunctionExecution{}
    var durationInterval sql.NullString

    err := r.db.QueryRowContext(ctx, query, id).Scan(
        &execution.ID, &execution.FunctionID, &execution.Status, &execution.Input,
        &execution.Output, &execution.Error, &durationInterval, &execution.MemoryUsed,
        pq.Array(&execution.Logs), &execution.ContainerID, &execution.ExecutorID, &execution.CreatedAt,
        &execution.StartedAt, &execution.CompletedAt,
    )

    if err != nil {
        if err == sql.ErrNoRows {
            return nil, fmt.Errorf("execution not found")
        }
        r.logger.Error("Failed to get execution by ID", zap.String("id", id.String()), zap.Error(err))
        return nil, fmt.Errorf("failed to get execution: %w", err)
    }

    // Convert duration from PostgreSQL interval
    if durationInterval.Valid {
        duration, err := intervalToDuration(durationInterval.String)
        if err != nil {
            r.logger.Warn("Failed to parse duration interval", zap.String("interval", durationInterval.String), zap.Error(err))
        } else {
            execution.Duration = duration
        }
    }

    return execution, nil
}

func (r *executionRepository) Update(ctx context.Context, id uuid.UUID, execution *domain.FunctionExecution) (*domain.FunctionExecution, error) {
    query := `
        UPDATE executions
        SET status = $2, output = $3, error = $4, duration = $5, memory_used = $6,
            logs = $7, container_id = $8, started_at = $9, completed_at = $10
        WHERE id = $1`

    _, err := r.db.ExecContext(ctx, query,
        id, execution.Status, jsonField(execution.Output), execution.Error,
        durationToInterval(execution.Duration), execution.MemoryUsed,
        pq.Array(execution.Logs), execution.ContainerID,
        execution.StartedAt, execution.CompletedAt,
    )

    if err != nil {
        r.logger.Error("Failed to update execution", zap.String("id", id.String()), zap.Error(err))
        return nil, fmt.Errorf("failed to update execution: %w", err)
    }

    // Return updated execution
    return r.GetByID(ctx, id)
}

func (r *executionRepository) Delete(ctx context.Context, id uuid.UUID) error {
    query := `DELETE FROM executions WHERE id = $1`

    result, err := r.db.ExecContext(ctx, query, id)
    if err != nil {
        r.logger.Error("Failed to delete execution", zap.String("id", id.String()), zap.Error(err))
        return fmt.Errorf("failed to delete execution: %w", err)
    }

    rowsAffected, err := result.RowsAffected()
    if err != nil {
        return fmt.Errorf("failed to get affected rows: %w", err)
    }

    if rowsAffected == 0 {
        return fmt.Errorf("execution not found")
    }

    return nil
}

func (r *executionRepository) List(ctx context.Context, functionID *uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error) {
    var query string
    var args []interface{}

    if functionID != nil {
        query = `
            SELECT id, function_id, status, input, output, error, duration, memory_used,
                   logs, container_id, executor_id, created_at, started_at, completed_at
            FROM executions WHERE function_id = $1
            ORDER BY created_at DESC LIMIT $2 OFFSET $3`
        args = []interface{}{*functionID, limit, offset}
    } else {
        query = `
            SELECT id, function_id, status, input, output, error, duration, memory_used,
                   logs, container_id, executor_id, created_at, started_at, completed_at
            FROM executions
            ORDER BY created_at DESC LIMIT $1 OFFSET $2`
        args = []interface{}{limit, offset}
    }

    rows, err := r.db.QueryContext(ctx, query, args...)
    if err != nil {
        r.logger.Error("Failed to list executions", zap.Error(err))
        return nil, fmt.Errorf("failed to list executions: %w", err)
    }
    defer rows.Close()

    var executions []*domain.FunctionExecution
    for rows.Next() {
        execution := &domain.FunctionExecution{}
        var durationInterval sql.NullString

        err := rows.Scan(
            &execution.ID, &execution.FunctionID, &execution.Status, &execution.Input,
            &execution.Output, &execution.Error, &durationInterval, &execution.MemoryUsed,
            pq.Array(&execution.Logs), &execution.ContainerID, &execution.ExecutorID, &execution.CreatedAt,
            &execution.StartedAt, &execution.CompletedAt,
        )

        if err != nil {
            r.logger.Error("Failed to scan execution", zap.Error(err))
            return nil, fmt.Errorf("failed to scan execution: %w", err)
        }

        // Convert duration from PostgreSQL interval
        if durationInterval.Valid {
            duration, err := intervalToDuration(durationInterval.String)
            if err != nil {
                r.logger.Warn("Failed to parse duration interval", zap.String("interval", durationInterval.String), zap.Error(err))
            } else {
                execution.Duration = duration
            }
        }

        executions = append(executions, execution)
    }

    if err = rows.Err(); err != nil {
        return nil, fmt.Errorf("failed to iterate executions: %w", err)
    }

    return executions, nil
}

func (r *executionRepository) GetByFunctionID(ctx context.Context, functionID uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error) {
    return r.List(ctx, &functionID, limit, offset)
}

func (r *executionRepository) GetByStatus(ctx context.Context, status domain.ExecutionStatus, limit, offset int) ([]*domain.FunctionExecution, error) {
    query := `
        SELECT id, function_id, status, input, output, error, duration, memory_used,
               logs, container_id, executor_id, created_at, started_at, completed_at
        FROM executions WHERE status = $1
        ORDER BY created_at DESC LIMIT $2 OFFSET $3`

    rows, err := r.db.QueryContext(ctx, query, status, limit, offset)
    if err != nil {
        r.logger.Error("Failed to get executions by status", zap.String("status", string(status)), zap.Error(err))
        return nil, fmt.Errorf("failed to get executions by status: %w", err)
    }
    defer rows.Close()

    var executions []*domain.FunctionExecution
    for rows.Next() {
        execution := &domain.FunctionExecution{}
        var durationInterval sql.NullString

        err := rows.Scan(
            &execution.ID, &execution.FunctionID, &execution.Status, &execution.Input,
            &execution.Output, &execution.Error, &durationInterval, &execution.MemoryUsed,
            pq.Array(&execution.Logs), &execution.ContainerID, &execution.ExecutorID, &execution.CreatedAt,
            &execution.StartedAt, &execution.CompletedAt,
        )

        if err != nil {
            r.logger.Error("Failed to scan execution", zap.Error(err))
            return nil, fmt.Errorf("failed to scan execution: %w", err)
        }

        // Convert duration from PostgreSQL interval
        if durationInterval.Valid {
            duration, err := intervalToDuration(durationInterval.String)
            if err != nil {
                r.logger.Warn("Failed to parse duration interval", zap.String("interval", durationInterval.String), zap.Error(err))
            } else {
                execution.Duration = duration
            }
        }

        executions = append(executions, execution)
    }

    if err = rows.Err(); err != nil {
        return nil, fmt.Errorf("failed to iterate executions: %w", err)
    }

    return executions, nil
}

func (r *executionRepository) UpdateStatus(ctx context.Context, id uuid.UUID, status domain.ExecutionStatus) error {
    query := `UPDATE executions SET status = $2 WHERE id = $1`

    result, err := r.db.ExecContext(ctx, query, id, status)
    if err != nil {
        r.logger.Error("Failed to update execution status", zap.String("id", id.String()), zap.Error(err))
        return fmt.Errorf("failed to update execution status: %w", err)
    }

    rowsAffected, err := result.RowsAffected()
    if err != nil {
        return fmt.Errorf("failed to get affected rows: %w", err)
    }

    if rowsAffected == 0 {
        return fmt.Errorf("execution not found")
    }

    return nil
}

func (r *executionRepository) GetRunningExecutions(ctx context.Context) ([]*domain.FunctionExecution, error) {
    return r.GetByStatus(ctx, domain.StatusRunning, 1000, 0)
}
@ -1,267 +0,0 @@
package postgres

import (
    "context"
    "database/sql"
    "encoding/json"
    "fmt"

    "github.com/google/uuid"
    "go.uber.org/zap"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/RyanCopley/skybridge/faas/internal/repository"
)

type functionRepository struct {
    db     *sql.DB
    logger *zap.Logger
}

func NewFunctionRepository(db *sql.DB, logger *zap.Logger) repository.FunctionRepository {
    return &functionRepository{
        db:     db,
        logger: logger,
    }
}

func (r *functionRepository) Create(ctx context.Context, function *domain.FunctionDefinition) (*domain.FunctionDefinition, error) {
    envJSON, err := json.Marshal(function.Environment)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal environment: %w", err)
    }

    query := `
        INSERT INTO functions (id, name, app_id, runtime, image, handler, code, environment, timeout, memory,
                               owner_type, owner_name, owner_owner, created_at, updated_at)
        VALUES ($1, $2, $3, $4, $5, $6, $7, $8, $9, $10, $11, $12, $13, $14, $15)
        RETURNING created_at, updated_at`

    timeoutValue, _ := function.Timeout.Value()
    err = r.db.QueryRowContext(ctx, query,
        function.ID, function.Name, function.AppID, function.Runtime, function.Image,
        function.Handler, function.Code, envJSON, timeoutValue,
        function.Memory, function.Owner.Type, function.Owner.Name, function.Owner.Owner,
        function.CreatedAt, function.UpdatedAt,
    ).Scan(&function.CreatedAt, &function.UpdatedAt)

    if err != nil {
        r.logger.Error("Failed to create function", zap.Error(err))
        return nil, fmt.Errorf("failed to create function: %w", err)
    }

    return function, nil
}

func (r *functionRepository) GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionDefinition, error) {
    query := `
        SELECT id, name, app_id, runtime, image, handler, code, environment, timeout, memory,
               owner_type, owner_name, owner_owner, created_at, updated_at
        FROM functions WHERE id = $1`

    function := &domain.FunctionDefinition{}
    var envJSON []byte

    err := r.db.QueryRowContext(ctx, query, id).Scan(
        &function.ID, &function.Name, &function.AppID, &function.Runtime, &function.Image,
        &function.Handler, &function.Code, &envJSON, &function.Timeout, &function.Memory,
        &function.Owner.Type, &function.Owner.Name, &function.Owner.Owner,
        &function.CreatedAt, &function.UpdatedAt,
    )

    if err != nil {
        if err == sql.ErrNoRows {
            return nil, fmt.Errorf("function not found")
        }
        r.logger.Error("Failed to get function by ID", zap.String("id", id.String()), zap.Error(err))
        return nil, fmt.Errorf("failed to get function: %w", err)
    }

    // Unmarshal environment
    if err := json.Unmarshal(envJSON, &function.Environment); err != nil {
        return nil, fmt.Errorf("failed to unmarshal environment: %w", err)
    }

    return function, nil
}

func (r *functionRepository) GetByName(ctx context.Context, appID, name string) (*domain.FunctionDefinition, error) {
    query := `
        SELECT id, name, app_id, runtime, image, handler, code, environment, timeout, memory,
               owner_type, owner_name, owner_owner, created_at, updated_at
        FROM functions WHERE app_id = $1 AND name = $2`

    function := &domain.FunctionDefinition{}
    var envJSON []byte

    err := r.db.QueryRowContext(ctx, query, appID, name).Scan(
        &function.ID, &function.Name, &function.AppID, &function.Runtime, &function.Image,
        &function.Handler, &function.Code, &envJSON, &function.Timeout, &function.Memory,
        &function.Owner.Type, &function.Owner.Name, &function.Owner.Owner,
        &function.CreatedAt, &function.UpdatedAt,
    )

    if err != nil {
        if err == sql.ErrNoRows {
            return nil, fmt.Errorf("function not found")
        }
        r.logger.Error("Failed to get function by name", zap.String("app_id", appID), zap.String("name", name), zap.Error(err))
        return nil, fmt.Errorf("failed to get function: %w", err)
    }

    // Unmarshal environment
    if err := json.Unmarshal(envJSON, &function.Environment); err != nil {
        return nil, fmt.Errorf("failed to unmarshal environment: %w", err)
    }

    return function, nil
}

func (r *functionRepository) Update(ctx context.Context, id uuid.UUID, updates *domain.UpdateFunctionRequest) (*domain.FunctionDefinition, error) {
    // First get the current function
    current, err := r.GetByID(ctx, id)
    if err != nil {
        return nil, err
    }

    // Apply updates
    if updates.Name != nil {
        current.Name = *updates.Name
    }
    if updates.Runtime != nil {
        current.Runtime = *updates.Runtime
    }
    if updates.Image != nil {
        current.Image = *updates.Image
    }
    if updates.Handler != nil {
        current.Handler = *updates.Handler
    }
    if updates.Code != nil {
        current.Code = *updates.Code
    }
    if updates.Environment != nil {
        current.Environment = updates.Environment
    }
    if updates.Timeout != nil {
        current.Timeout = *updates.Timeout
    }
    if updates.Memory != nil {
        current.Memory = *updates.Memory
    }
    if updates.Owner != nil {
        current.Owner = *updates.Owner
    }

    // Marshal environment
    envJSON, err := json.Marshal(current.Environment)
    if err != nil {
        return nil, fmt.Errorf("failed to marshal environment: %w", err)
    }

    query := `
        UPDATE functions
        SET name = $2, runtime = $3, image = $4, handler = $5, code = $6, environment = $7,
            timeout = $8, memory = $9, owner_type = $10, owner_name = $11, owner_owner = $12,
            updated_at = CURRENT_TIMESTAMP
        WHERE id = $1
        RETURNING updated_at`

    timeoutValue, _ := current.Timeout.Value()
    err = r.db.QueryRowContext(ctx, query,
        id, current.Name, current.Runtime, current.Image, current.Handler,
        current.Code, envJSON, timeoutValue, current.Memory,
        current.Owner.Type, current.Owner.Name, current.Owner.Owner,
    ).Scan(&current.UpdatedAt)

    if err != nil {
        r.logger.Error("Failed to update function", zap.String("id", id.String()), zap.Error(err))
        return nil, fmt.Errorf("failed to update function: %w", err)
    }

    return current, nil
}

func (r *functionRepository) Delete(ctx context.Context, id uuid.UUID) error {
    query := `DELETE FROM functions WHERE id = $1`

    result, err := r.db.ExecContext(ctx, query, id)
    if err != nil {
        r.logger.Error("Failed to delete function", zap.String("id", id.String()), zap.Error(err))
        return fmt.Errorf("failed to delete function: %w", err)
    }

    rowsAffected, err := result.RowsAffected()
    if err != nil {
        return fmt.Errorf("failed to get affected rows: %w", err)
    }

    if rowsAffected == 0 {
        return fmt.Errorf("function not found")
    }

    return nil
}

func (r *functionRepository) List(ctx context.Context, appID string, limit, offset int) ([]*domain.FunctionDefinition, error) {
    var query string
    var args []interface{}

    if appID != "" {
        query = `
            SELECT id, name, app_id, runtime, image, handler, code, environment, timeout, memory,
                   owner_type, owner_name, owner_owner, created_at, updated_at
            FROM functions WHERE app_id = $1
            ORDER BY created_at DESC LIMIT $2 OFFSET $3`
        args = []interface{}{appID, limit, offset}
    } else {
        query = `
            SELECT id, name, app_id, runtime, image, handler, code, environment, timeout, memory,
                   owner_type, owner_name, owner_owner, created_at, updated_at
            FROM functions
            ORDER BY created_at DESC LIMIT $1 OFFSET $2`
        args = []interface{}{limit, offset}
    }

    rows, err := r.db.QueryContext(ctx, query, args...)
    if err != nil {
        r.logger.Error("Failed to list functions", zap.Error(err))
        return nil, fmt.Errorf("failed to list functions: %w", err)
    }
    defer rows.Close()

    var functions []*domain.FunctionDefinition
    for rows.Next() {
        function := &domain.FunctionDefinition{}
        var envJSON []byte

        err := rows.Scan(
            &function.ID, &function.Name, &function.AppID, &function.Runtime, &function.Image,
            &function.Handler, &function.Code, &envJSON, &function.Timeout, &function.Memory,
            &function.Owner.Type, &function.Owner.Name, &function.Owner.Owner,
            &function.CreatedAt, &function.UpdatedAt,
        )

        if err != nil {
            r.logger.Error("Failed to scan function", zap.Error(err))
            return nil, fmt.Errorf("failed to scan function: %w", err)
        }

        // Unmarshal environment
        if err := json.Unmarshal(envJSON, &function.Environment); err != nil {
            return nil, fmt.Errorf("failed to unmarshal environment: %w", err)
        }

        functions = append(functions, function)
    }

    if err = rows.Err(); err != nil {
        return nil, fmt.Errorf("failed to iterate functions: %w", err)
    }

    return functions, nil
}

func (r *functionRepository) GetByAppID(ctx context.Context, appID string) ([]*domain.FunctionDefinition, error) {
    return r.List(ctx, appID, 1000, 0) // Get all functions for the app
}
@ -1,431 +0,0 @@
package docker

import (
    "context"
    "encoding/json"
    "fmt"
    "io"
    "strings"
    "time"

    "github.com/docker/docker/api/types/container"
    "github.com/docker/docker/api/types/filters"
    "github.com/docker/docker/api/types/image"
    "github.com/docker/docker/client"
    "github.com/google/uuid"
    "go.uber.org/zap"

    "github.com/RyanCopley/skybridge/faas/internal/domain"
    "github.com/RyanCopley/skybridge/faas/internal/runtime"
)

type DockerRuntime struct {
    client *client.Client
    logger *zap.Logger
    config *Config
}

type Config struct {
    DockerHost     string            `json:"docker_host"`
    NetworkMode    string            `json:"network_mode"`
    SecurityOpts   []string          `json:"security_opts"`
    DefaultLabels  map[string]string `json:"default_labels"`
    MaxCPUs        float64           `json:"max_cpus"`
    MaxMemory      int64             `json:"max_memory"`
    TimeoutSeconds int               `json:"timeout_seconds"`
}

func NewDockerRuntime(logger *zap.Logger, cfg *Config) (*DockerRuntime, error) {
    if cfg == nil {
        cfg = &Config{
            NetworkMode:    "bridge",
            SecurityOpts:   []string{"no-new-privileges:true"},
            DefaultLabels:  map[string]string{"service": "faas"},
            MaxCPUs:        2.0,
            MaxMemory:      512 * 1024 * 1024, // 512MB
            TimeoutSeconds: 300,
        }
    }

    var cli *client.Client
    var err error

    if cfg.DockerHost != "" {
        cli, err = client.NewClientWithOpts(client.WithHost(cfg.DockerHost))
    } else {
        cli, err = client.NewClientWithOpts(client.FromEnv)
    }

    if err != nil {
        return nil, fmt.Errorf("failed to create Docker client: %w", err)
    }

    return &DockerRuntime{
        client: cli,
        logger: logger,
        config: cfg,
    }, nil
}

func (d *DockerRuntime) Execute(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage) (*domain.ExecutionResult, error) {
    executionID := uuid.New()
    startTime := time.Now()

    d.logger.Info("Starting function execution",
        zap.String("function_id", function.ID.String()),
        zap.String("execution_id", executionID.String()),
        zap.String("image", function.Image))

    // Create container configuration
    containerConfig := &container.Config{
        Image: function.Image,
        Env:   d.buildEnvironment(function, input),
        Labels: map[string]string{
            "faas.function_id":   function.ID.String(),
            "faas.execution_id":  executionID.String(),
            "faas.function_name": function.Name,
        },
        WorkingDir: "/app",
        Cmd:        []string{function.Handler},
    }

    // Add default labels
    for k, v := range d.config.DefaultLabels {
        containerConfig.Labels[k] = v
    }

    // Create host configuration with resource limits
    hostConfig := &container.HostConfig{
        Resources: container.Resources{
            Memory:    int64(function.Memory) * 1024 * 1024, // Convert MB to bytes
            CPUQuota:  int64(d.config.MaxCPUs * 100000),     // CPU quota in microseconds
            CPUPeriod: 100000,                               // CPU period in microseconds
        },
        NetworkMode: container.NetworkMode(d.config.NetworkMode),
        SecurityOpt: d.config.SecurityOpts,
        AutoRemove:  true,
    }

    // Create container
    resp, err := d.client.ContainerCreate(ctx, containerConfig, hostConfig, nil, nil, "")
    if err != nil {
        return &domain.ExecutionResult{
            Error:    fmt.Sprintf("failed to create container: %v", err),
            Duration: time.Since(startTime),
        }, nil
    }

    containerID := resp.ID

    // Start container
    if err := d.client.ContainerStart(ctx, containerID, container.StartOptions{}); err != nil {
        return &domain.ExecutionResult{
            Error:    fmt.Sprintf("failed to start container: %v", err),
            Duration: time.Since(startTime),
        }, nil
    }

    // Wait for container to finish with timeout
    timeoutCtx, cancel := context.WithTimeout(ctx, function.Timeout.Duration)
    defer cancel()

    statusCh, errCh := d.client.ContainerWait(timeoutCtx, containerID, container.WaitConditionNotRunning)

    var waitResult container.WaitResponse
    select {
    case result := <-statusCh:
        waitResult = result
    case err := <-errCh:
        d.client.ContainerKill(ctx, containerID, "SIGTERM")
        return &domain.ExecutionResult{
            Error:    fmt.Sprintf("container wait error: %v", err),
            Duration: time.Since(startTime),
        }, nil
    case <-timeoutCtx.Done():
        d.client.ContainerKill(ctx, containerID, "SIGTERM")
        return &domain.ExecutionResult{
            Error:    "execution timeout",
            Duration: time.Since(startTime),
        }, nil
    }

    // Get container logs
    logs, err := d.getContainerLogs(ctx, containerID)
    if err != nil {
        d.logger.Warn("Failed to get container logs", zap.Error(err))
    }

    // Get container stats for memory usage
    memoryUsed := d.getMemoryUsage(ctx, containerID)

    duration := time.Since(startTime)

    // Parse output from logs if successful
    var output json.RawMessage
    var execError string

    if waitResult.StatusCode == 0 {
        // Extract output from logs (assuming last line contains JSON output)
        if len(logs) > 0 {
            lastLog := logs[len(logs)-1]
            if json.Valid([]byte(lastLog)) {
                output = json.RawMessage(lastLog)
            } else {
                output = json.RawMessage(fmt.Sprintf(`{"result": "%s"}`, lastLog))
            }
        }
    } else {
        execError = fmt.Sprintf("container exited with code %d", waitResult.StatusCode)
        if len(logs) > 0 {
            execError += ": " + strings.Join(logs, "\n")
        }
    }

    d.logger.Info("Function execution completed",
        zap.String("function_id", function.ID.String()),
        zap.String("execution_id", executionID.String()),
        zap.Duration("duration", duration),
        zap.Int64("status_code", waitResult.StatusCode),
        zap.Int("memory_used", memoryUsed))

    return &domain.ExecutionResult{
        Output:     output,
        Error:      execError,
        Duration:   duration,
        MemoryUsed: memoryUsed,
        Logs:       logs,
    }, nil
}

func (d *DockerRuntime) Deploy(ctx context.Context, function *domain.FunctionDefinition) error {
    d.logger.Info("Deploying function",
        zap.String("function_id", function.ID.String()),
|
||||
zap.String("image", function.Image))
|
||||
|
||||
// Pull image
|
||||
reader, err := d.client.ImagePull(ctx, function.Image, image.PullOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to pull image %s: %w", function.Image, err)
|
||||
}
|
||||
defer reader.Close()
|
||||
|
||||
// Read the pull response to ensure it completes
|
||||
_, err = io.ReadAll(reader)
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to complete image pull: %w", err)
|
||||
}
|
||||
|
||||
d.logger.Info("Function deployed successfully",
|
||||
zap.String("function_id", function.ID.String()),
|
||||
zap.String("image", function.Image))
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) Remove(ctx context.Context, functionID uuid.UUID) error {
	d.logger.Info("Removing function containers", zap.String("function_id", functionID.String()))

	// List containers with the function label
	filterArgs := filters.NewArgs()
	filterArgs.Add("label", fmt.Sprintf("faas.function_id=%s", functionID.String()))
	containers, err := d.client.ContainerList(ctx, container.ListOptions{
		All:     true,
		Filters: filterArgs,
	})
	if err != nil {
		return fmt.Errorf("failed to list containers: %w", err)
	}

	// Remove containers (force removal so running containers are cleaned up too)
	for _, c := range containers {
		if err := d.client.ContainerRemove(ctx, c.ID, container.RemoveOptions{Force: true}); err != nil {
			d.logger.Warn("Failed to remove container",
				zap.String("container_id", c.ID),
				zap.Error(err))
		}
	}

	return nil
}
|
||||
|
||||
func (d *DockerRuntime) GetLogs(ctx context.Context, executionID uuid.UUID) ([]string, error) {
|
||||
// Find container by execution ID
|
||||
filters := filters.NewArgs()
|
||||
filters.Add("label", fmt.Sprintf("faas.execution_id=%s", executionID.String()))
|
||||
containers, err := d.client.ContainerList(ctx, container.ListOptions{
|
||||
All: true,
|
||||
Filters: filters,
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to list containers: %w", err)
|
||||
}
|
||||
|
||||
if len(containers) == 0 {
|
||||
return nil, fmt.Errorf("no container found for execution %s", executionID.String())
|
||||
}
|
||||
|
||||
return d.getContainerLogs(ctx, containers[0].ID)
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) HealthCheck(ctx context.Context) error {
|
||||
_, err := d.client.Ping(ctx)
|
||||
if err != nil {
|
||||
return fmt.Errorf("Docker daemon not accessible: %w", err)
|
||||
}
|
||||
return nil
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) GetInfo(ctx context.Context) (*runtime.RuntimeInfo, error) {
|
||||
info, err := d.client.Info(ctx)
|
||||
if err != nil {
|
||||
return &runtime.RuntimeInfo{
|
||||
Type: "docker",
|
||||
Available: false,
|
||||
}, nil
|
||||
}
|
||||
|
||||
return &runtime.RuntimeInfo{
|
||||
Type: "docker",
|
||||
Version: info.ServerVersion,
|
||||
Available: true,
|
||||
Endpoint: d.client.DaemonHost(),
|
||||
Metadata: map[string]string{
|
||||
"containers": fmt.Sprintf("%d", info.Containers),
|
||||
"images": fmt.Sprintf("%d", info.Images),
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) ListContainers(ctx context.Context) ([]runtime.ContainerInfo, error) {
	filterArgs := filters.NewArgs()
	filterArgs.Add("label", "service=faas")
	containers, err := d.client.ContainerList(ctx, container.ListOptions{
		All:     true,
		Filters: filterArgs,
	})
	if err != nil {
		return nil, fmt.Errorf("failed to list containers: %w", err)
	}

	var result []runtime.ContainerInfo
	for _, c := range containers {
		functionIDStr, exists := c.Labels["faas.function_id"]
		if !exists {
			continue
		}

		functionID, err := uuid.Parse(functionIDStr)
		if err != nil {
			continue
		}

		result = append(result, runtime.ContainerInfo{
			ID:         c.ID,
			FunctionID: functionID,
			Status:     c.Status,
			Image:      c.Image,
			CreatedAt:  time.Unix(c.Created, 0).Format(time.RFC3339),
			Labels:     c.Labels,
		})
	}

	return result, nil
}
|
||||
|
||||
func (d *DockerRuntime) StopExecution(ctx context.Context, executionID uuid.UUID) error {
|
||||
// Find container by execution ID
|
||||
filters := filters.NewArgs()
|
||||
filters.Add("label", fmt.Sprintf("faas.execution_id=%s", executionID.String()))
|
||||
containers, err := d.client.ContainerList(ctx, container.ListOptions{
|
||||
All: true,
|
||||
Filters: filters,
|
||||
})
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to list containers: %w", err)
|
||||
}
|
||||
|
||||
if len(containers) == 0 {
|
||||
return fmt.Errorf("no container found for execution %s", executionID.String())
|
||||
}
|
||||
|
||||
// Stop container
|
||||
timeout := 10
|
||||
return d.client.ContainerStop(ctx, containers[0].ID, container.StopOptions{Timeout: &timeout})
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) buildEnvironment(function *domain.FunctionDefinition, input json.RawMessage) []string {
|
||||
env := []string{
|
||||
fmt.Sprintf("FAAS_FUNCTION_ID=%s", function.ID.String()),
|
||||
fmt.Sprintf("FAAS_FUNCTION_NAME=%s", function.Name),
|
||||
fmt.Sprintf("FAAS_RUNTIME=%s", function.Runtime),
|
||||
fmt.Sprintf("FAAS_HANDLER=%s", function.Handler),
|
||||
fmt.Sprintf("FAAS_MEMORY=%d", function.Memory),
|
||||
fmt.Sprintf("FAAS_TIMEOUT=%s", function.Timeout.String()),
|
||||
}
|
||||
|
||||
// Add function-specific environment variables
|
||||
for key, value := range function.Environment {
|
||||
env = append(env, fmt.Sprintf("%s=%s", key, value))
|
||||
}
|
||||
|
||||
// Add input as environment variable if provided
|
||||
if input != nil {
|
||||
env = append(env, fmt.Sprintf("FAAS_INPUT=%s", string(input)))
|
||||
}
|
||||
|
||||
return env
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) getContainerLogs(ctx context.Context, containerID string) ([]string, error) {
|
||||
options := container.LogsOptions{
|
||||
ShowStdout: true,
|
||||
ShowStderr: true,
|
||||
Timestamps: false,
|
||||
}
|
||||
|
||||
reader, err := d.client.ContainerLogs(ctx, containerID, options)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get container logs: %w", err)
|
||||
}
|
||||
defer reader.Close()
|
||||
|
||||
logs, err := io.ReadAll(reader)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read container logs: %w", err)
|
||||
}
|
||||
|
||||
// Split logs into lines and remove empty lines
|
||||
lines := strings.Split(string(logs), "\n")
|
||||
var result []string
|
||||
for _, line := range lines {
|
||||
if strings.TrimSpace(line) != "" {
|
||||
result = append(result, line)
|
||||
}
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
func (d *DockerRuntime) getMemoryUsage(ctx context.Context, containerID string) int {
|
||||
stats, err := d.client.ContainerStats(ctx, containerID, false)
|
||||
if err != nil {
|
||||
d.logger.Warn("Failed to get container stats", zap.Error(err))
|
||||
return 0
|
||||
}
|
||||
defer stats.Body.Close()
|
||||
|
||||
var containerStats struct {
|
||||
MemoryStats struct {
|
||||
Usage uint64 `json:"usage"`
|
||||
} `json:"memory_stats"`
|
||||
}
|
||||
if err := json.NewDecoder(stats.Body).Decode(&containerStats); err != nil {
|
||||
d.logger.Warn("Failed to decode container stats", zap.Error(err))
|
||||
return 0
|
||||
}
|
||||
|
||||
// Return memory usage in MB
|
||||
return int(containerStats.MemoryStats.Usage / 1024 / 1024)
|
||||
}
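// Illustrative sketch (not part of the original file): the slice of the Docker
// ContainerStats JSON that getMemoryUsage decodes. Only memory_stats.usage is
// read; the byte count below is made up for the example.
//
//	{
//	  "memory_stats": {
//	    "usage": 52428800
//	  }
//	}
//
// 52428800 / 1024 / 1024 = 50, so getMemoryUsage would report 50 MB for this sample.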
|
||||
@ -1,902 +0,0 @@
|
||||
package docker
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"io"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
"time"
|
||||
|
||||
"github.com/docker/docker/api/types/container"
|
||||
"github.com/docker/docker/api/types/image"
|
||||
"github.com/docker/docker/client"
|
||||
"github.com/google/uuid"
|
||||
"go.uber.org/zap"
|
||||
|
||||
"github.com/RyanCopley/skybridge/faas/internal/domain"
|
||||
"github.com/RyanCopley/skybridge/faas/internal/runtime"
|
||||
)
|
||||
|
||||
type SimpleDockerRuntime struct {
|
||||
logger *zap.Logger
|
||||
client *client.Client
|
||||
}
|
||||
|
||||
func NewSimpleDockerRuntime(logger *zap.Logger) (*SimpleDockerRuntime, error) {
|
||||
var cli *client.Client
|
||||
var err error
|
||||
|
||||
// Try different socket paths with ping test
|
||||
socketPaths := []string{
|
||||
"unix:///run/user/1000/podman/podman.sock", // Podman socket (mounted from host)
|
||||
"unix:///var/run/docker.sock", // Standard Docker socket
|
||||
}
|
||||
|
||||
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
|
||||
defer cancel()
|
||||
|
||||
	for _, socketPath := range socketPaths {
		logger.Info("Attempting to connect to socket", zap.String("path", socketPath))

		cli, err = client.NewClientWithOpts(
			client.WithHost(socketPath),
			client.WithAPIVersionNegotiation(),
		)
		if err != nil {
			logger.Warn("Failed to create client", zap.String("path", socketPath), zap.Error(err))
			cli = nil // make sure a failed attempt does not leave a stale client behind
			continue
		}

		// Test connection
		if _, err := cli.Ping(ctx); err != nil {
			logger.Warn("Failed to ping daemon", zap.String("path", socketPath), zap.Error(err))
			cli = nil // discard the unreachable client so the env fallback below is attempted
			continue
		}

		logger.Info("Successfully connected to Docker/Podman", zap.String("path", socketPath))
		break
	}
|
||||
|
||||
// Final fallback to environment
|
||||
if cli == nil {
|
||||
logger.Info("Trying default Docker environment")
|
||||
cli, err = client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create Docker client: %w", err)
|
||||
}
|
||||
|
||||
if _, err := cli.Ping(ctx); err != nil {
|
||||
return nil, fmt.Errorf("failed to ping Docker/Podman daemon: %w", err)
|
||||
}
|
||||
}
|
||||
|
||||
if cli == nil {
|
||||
return nil, fmt.Errorf("no working Docker/Podman socket found")
|
||||
}
|
||||
|
||||
return &SimpleDockerRuntime{
|
||||
logger: logger,
|
||||
client: cli,
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) Execute(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage) (*domain.ExecutionResult, error) {
|
||||
return s.ExecuteWithLogStreaming(ctx, function, input, nil)
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) ExecuteWithLogStreaming(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage, logCallback runtime.LogStreamCallback) (*domain.ExecutionResult, error) {
|
||||
startTime := time.Now()
|
||||
|
||||
s.logger.Info("Starting ExecuteWithLogStreaming",
|
||||
zap.String("function_id", function.ID.String()),
|
||||
zap.String("function_name", function.Name),
|
||||
zap.Bool("has_log_callback", logCallback != nil))
|
||||
|
||||
// Create container
|
||||
containerID, err := s.createContainer(ctx, function, input)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to create container: %w", err)
|
||||
}
|
||||
|
||||
s.logger.Debug("Container created successfully",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("function_id", function.ID.String()))
|
||||
|
||||
// Start container
|
||||
if err := s.client.ContainerStart(ctx, containerID, container.StartOptions{}); err != nil {
|
||||
s.cleanupContainer(ctx, containerID)
|
||||
return nil, fmt.Errorf("failed to start container: %w", err)
|
||||
}
|
||||
|
||||
// Create channels for log streaming
|
||||
logChan := make(chan string, 1000) // Buffer for logs
|
||||
doneChan := make(chan struct{}) // Signal to stop streaming
|
||||
|
||||
// Start log streaming in a goroutine
|
||||
s.logger.Debug("Starting log streaming goroutine",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("function_id", function.ID.String()))
|
||||
go s.streamContainerLogs(context.Background(), containerID, logChan, doneChan)
|
||||
|
||||
// Create timeout context based on function timeout
|
||||
var timeoutCtx context.Context
|
||||
var cancel context.CancelFunc
|
||||
if function.Timeout.Duration > 0 {
|
||||
timeoutCtx, cancel = context.WithTimeout(ctx, function.Timeout.Duration)
|
||||
defer cancel()
|
||||
s.logger.Debug("Set execution timeout",
|
||||
zap.Duration("timeout", function.Timeout.Duration),
|
||||
zap.String("container_id", containerID))
|
||||
} else {
|
||||
timeoutCtx = ctx
|
||||
s.logger.Debug("No execution timeout set",
|
||||
zap.String("container_id", containerID))
|
||||
}
|
||||
|
||||
// For streaming logs, collect logs in a separate goroutine and call the callback
|
||||
var streamedLogs []string
|
||||
logsMutex := &sync.Mutex{}
|
||||
|
||||
if logCallback != nil {
|
||||
s.logger.Info("Starting log callback goroutine",
|
||||
zap.String("container_id", containerID))
|
||||
go func() {
|
||||
// Keep track of the last time we called the callback to avoid too frequent updates
|
||||
lastUpdate := time.Now()
|
||||
ticker := time.NewTicker(1 * time.Second) // Update at most once per second
|
||||
defer ticker.Stop()
|
||||
|
||||
for {
|
||||
select {
|
||||
case log, ok := <-logChan:
|
||||
if !ok {
|
||||
// Channel closed, exit the goroutine
|
||||
s.logger.Debug("Log channel closed, exiting callback goroutine",
|
||||
zap.String("container_id", containerID))
|
||||
return
|
||||
}
|
||||
|
||||
s.logger.Debug("Received log line from channel",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("log_line", log))
|
||||
|
||||
logsMutex.Lock()
|
||||
streamedLogs = append(streamedLogs, log)
|
||||
shouldUpdate := time.Since(lastUpdate) >= 1*time.Second
|
||||
currentLogCount := len(streamedLogs)
|
||||
logsMutex.Unlock()
|
||||
|
||||
// Call the callback if it's been at least 1 second since last update
|
||||
if shouldUpdate {
|
||||
logsMutex.Lock()
|
||||
logsCopy := make([]string, len(streamedLogs))
|
||||
copy(logsCopy, streamedLogs)
|
||||
logsMutex.Unlock()
|
||||
|
||||
s.logger.Info("Calling log callback with accumulated logs",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("log_count", len(logsCopy)))
|
||||
|
||||
// Call the callback with the current logs
|
||||
if err := logCallback(logsCopy); err != nil {
|
||||
s.logger.Error("Failed to stream logs to callback",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err))
|
||||
}
|
||||
lastUpdate = time.Now()
|
||||
} else {
|
||||
s.logger.Debug("Skipping callback update (too frequent)",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("current_log_count", currentLogCount),
|
||||
zap.Duration("time_since_last_update", time.Since(lastUpdate)))
|
||||
}
|
||||
case <-ticker.C:
|
||||
// Periodic update to ensure logs are streamed even if no new logs arrive
|
||||
logsMutex.Lock()
|
||||
if len(streamedLogs) > 0 && time.Since(lastUpdate) >= 1*time.Second {
|
||||
logsCopy := make([]string, len(streamedLogs))
|
||||
copy(logsCopy, streamedLogs)
|
||||
logCount := len(logsCopy)
|
||||
logsMutex.Unlock()
|
||||
|
||||
s.logger.Debug("Periodic callback update triggered",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("log_count", logCount))
|
||||
|
||||
// Call the callback with the current logs
|
||||
if err := logCallback(logsCopy); err != nil {
|
||||
s.logger.Error("Failed to stream logs to callback (periodic)",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err))
|
||||
}
|
||||
lastUpdate = time.Now()
|
||||
} else {
|
||||
logsMutex.Unlock()
|
||||
s.logger.Debug("Skipping periodic callback (no logs or too frequent)",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Duration("time_since_last_update", time.Since(lastUpdate)))
|
||||
}
|
||||
}
|
||||
}
|
||||
}()
|
||||
} else {
|
||||
s.logger.Debug("No log callback provided, logs will be collected at the end",
|
||||
zap.String("container_id", containerID))
|
||||
}
|
||||
|
||||
// Wait for container to finish with timeout
|
||||
statusCh, errCh := s.client.ContainerWait(timeoutCtx, containerID, container.WaitConditionNotRunning)
|
||||
|
||||
var timedOut bool
|
||||
select {
|
||||
case err := <-errCh:
|
||||
close(doneChan) // Stop log streaming
|
||||
s.cleanupContainer(ctx, containerID)
|
||||
return nil, fmt.Errorf("error waiting for container: %w", err)
|
||||
case <-statusCh:
|
||||
// Container finished normally
|
||||
case <-timeoutCtx.Done():
|
||||
// Timeout occurred
|
||||
timedOut = true
|
||||
// doneChan will be closed below in the common cleanup
|
||||
|
||||
// Stop the container in the background - don't wait for it to complete
|
||||
go func() {
|
||||
// Use a very short timeout for stopping, then kill if needed
|
||||
if err := s.client.ContainerStop(context.Background(), containerID, container.StopOptions{
|
||||
Timeout: &[]int{1}[0], // Only 1 second grace period for stop
|
||||
}); err != nil {
|
||||
s.logger.Warn("Failed to stop timed out container gracefully, attempting to kill",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err))
|
||||
// If stop fails, try to kill it immediately
|
||||
if killErr := s.client.ContainerKill(context.Background(), containerID, "SIGKILL"); killErr != nil {
|
||||
s.logger.Error("Failed to kill timed out container",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(killErr))
|
||||
}
|
||||
}
|
||||
}()
|
||||
}
|
||||
|
||||
// Collect all streamed logs
|
||||
var logs []string
|
||||
if !timedOut {
|
||||
// Collect any remaining logs from the channel
|
||||
close(doneChan) // Stop log streaming
|
||||
|
||||
// Give a moment for final logs to be processed
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
|
||||
if logCallback == nil {
|
||||
// If no callback, collect all logs at the end
|
||||
for log := range logChan {
|
||||
logs = append(logs, log)
|
||||
}
|
||||
} else {
|
||||
// If we have a callback, use the streamed logs plus any remaining in channel
|
||||
logsMutex.Lock()
|
||||
logs = make([]string, len(streamedLogs))
|
||||
copy(logs, streamedLogs)
|
||||
logsMutex.Unlock()
|
||||
|
||||
// Collect any remaining logs in the channel
|
||||
remainingLogs := make([]string, 0)
|
||||
for {
|
||||
select {
|
||||
case log := <-logChan:
|
||||
remainingLogs = append(remainingLogs, log)
|
||||
default:
|
||||
goto done
|
||||
}
|
||||
}
|
||||
done:
|
||||
logs = append(logs, remainingLogs...)
|
||||
}
|
||||
} else {
|
||||
logs = []string{"Container execution timed out"}
|
||||
}
|
||||
|
||||
var stats *container.InspectResponse
|
||||
|
||||
// For timed-out containers, still try to collect logs but with a short timeout
|
||||
if timedOut {
|
||||
// Collect any remaining logs from the channel before adding timeout message
|
||||
// doneChan was already closed above
|
||||
if logCallback == nil {
|
||||
// If no callback was used, try to collect logs directly but with short timeout
|
||||
logCtx, logCancel := context.WithTimeout(context.Background(), 2*time.Second)
|
||||
finalLogs, err := s.getContainerLogs(logCtx, containerID)
|
||||
logCancel()
|
||||
if err == nil {
|
||||
logs = finalLogs
|
||||
}
|
||||
} else {
|
||||
// If callback was used, use the streamed logs
|
||||
logsMutex.Lock()
|
||||
logs = make([]string, len(streamedLogs))
|
||||
copy(logs, streamedLogs)
|
||||
logsMutex.Unlock()
|
||||
}
|
||||
logs = append(logs, "Container execution timed out")
|
||||
} else {
|
||||
// Get container stats
|
||||
statsResponse, err := s.client.ContainerInspect(ctx, containerID)
|
||||
if err != nil {
|
||||
s.logger.Warn("Failed to inspect container", zap.Error(err))
|
||||
} else {
|
||||
stats = &statsResponse
|
||||
}
|
||||
}
|
||||
|
||||
// Get execution result
|
||||
result := &domain.ExecutionResult{
|
||||
Logs: logs,
|
||||
Duration: time.Since(startTime).Truncate(time.Millisecond),
|
||||
}
|
||||
|
||||
	// Handle timeout case
	if timedOut {
		result.Error = fmt.Sprintf("Function execution timed out after %v", function.Timeout.Duration)
		result.Output = json.RawMessage(`{"error": "Function execution timed out"}`)
	} else {
		// Try to get output from container for successful executions
		// (guard against a nil inspect response if ContainerInspect failed above)
		if stats != nil && stats.State != nil {
			if stats.State.ExitCode == 0 {
				// Try to get output from container
				output, err := s.getContainerOutput(ctx, containerID)
				if err != nil {
					s.logger.Warn("Failed to get container output", zap.Error(err))
					result.Output = json.RawMessage(`{"error": "Failed to retrieve output"}`)
				} else {
					result.Output = output
				}
			} else {
				result.Error = fmt.Sprintf("Container exited with code %d", stats.State.ExitCode)
				result.Output = json.RawMessage(`{"error": "Container execution failed"}`)
			}
		} else {
			s.logger.Warn("Container state not available")
		}
	}
|
||||
|
||||
// Cleanup container - for timed-out containers, do this in background
|
||||
if timedOut {
|
||||
go func() {
|
||||
s.cleanupContainer(context.Background(), containerID)
|
||||
}()
|
||||
} else {
|
||||
s.cleanupContainer(ctx, containerID)
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) Deploy(ctx context.Context, function *domain.FunctionDefinition) error {
|
||||
s.logger.Info("Deploying function image",
|
||||
zap.String("function_id", function.ID.String()),
|
||||
zap.String("image", function.Image))
|
||||
|
||||
// Pull the image if it doesn't exist
|
||||
_, _, err := s.client.ImageInspectWithRaw(ctx, function.Image)
|
||||
if err != nil {
|
||||
// Image doesn't exist, try to pull it
|
||||
s.logger.Info("Pulling image", zap.String("image", function.Image))
|
||||
reader, err := s.client.ImagePull(ctx, function.Image, image.PullOptions{})
|
||||
if err != nil {
|
||||
return fmt.Errorf("failed to pull image %s: %w", function.Image, err)
|
||||
}
|
||||
defer reader.Close()
|
||||
|
||||
// Wait for pull to complete (we could parse the output but for now we'll just wait)
|
||||
buf := make([]byte, 1024)
|
||||
for {
|
||||
_, err := reader.Read(buf)
|
||||
if err != nil {
|
||||
break
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) Remove(ctx context.Context, functionID uuid.UUID) error {
|
||||
s.logger.Info("Removing function resources", zap.String("function_id", functionID.String()))
|
||||
// In a real implementation, we would remove any function-specific resources
|
||||
// For now, we don't need to do anything as containers are cleaned up after execution
|
||||
return nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) GetLogs(ctx context.Context, executionID uuid.UUID) ([]string, error) {
|
||||
// In a real implementation, we would need to store container IDs associated with execution IDs
|
||||
// For now, we'll return a placeholder
|
||||
return []string{
|
||||
"Function execution logs would appear here",
|
||||
"In a full implementation, these would be retrieved from the Docker container",
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) HealthCheck(ctx context.Context) error {
|
||||
_, err := s.client.Ping(ctx)
|
||||
return err
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) GetInfo(ctx context.Context) (*runtime.RuntimeInfo, error) {
|
||||
info, err := s.client.Info(ctx)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get Docker info: %w", err)
|
||||
}
|
||||
|
||||
return &runtime.RuntimeInfo{
|
||||
Type: "docker",
|
||||
Version: info.ServerVersion,
|
||||
Available: true,
|
||||
Endpoint: s.client.DaemonHost(),
|
||||
Metadata: map[string]string{
|
||||
"containers": fmt.Sprintf("%d", info.Containers),
|
||||
"images": fmt.Sprintf("%d", info.Images),
|
||||
"docker_root_dir": info.DockerRootDir,
|
||||
},
|
||||
}, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) ListContainers(ctx context.Context) ([]runtime.ContainerInfo, error) {
|
||||
containers, err := s.client.ContainerList(ctx, container.ListOptions{})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to list containers: %w", err)
|
||||
}
|
||||
|
||||
var containerInfos []runtime.ContainerInfo
|
||||
for _, c := range containers {
|
||||
containerInfo := runtime.ContainerInfo{
|
||||
ID: c.ID,
|
||||
Status: c.State,
|
||||
Image: c.Image,
|
||||
}
|
||||
|
||||
if len(c.Names) > 0 {
|
||||
containerInfo.ID = c.Names[0]
|
||||
}
|
||||
|
||||
containerInfos = append(containerInfos, containerInfo)
|
||||
}
|
||||
|
||||
return containerInfos, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) StopExecution(ctx context.Context, executionID uuid.UUID) error {
|
||||
s.logger.Info("Stopping execution", zap.String("execution_id", executionID.String()))
|
||||
// In a real implementation, we would need to map execution IDs to container IDs
|
||||
// For now, we'll just log that this was called
|
||||
return nil
|
||||
}
|
||||
|
||||
// Helper methods
|
||||
|
||||
func (s *SimpleDockerRuntime) createContainer(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage) (string, error) {
|
||||
// Prepare environment variables
|
||||
env := []string{}
|
||||
for key, value := range function.Environment {
|
||||
env = append(env, fmt.Sprintf("%s=%s", key, value))
|
||||
}
|
||||
|
||||
// Add input as environment variable
|
||||
inputStr := string(input)
|
||||
if inputStr != "" {
|
||||
env = append(env, fmt.Sprintf("FUNCTION_INPUT=%s", inputStr))
|
||||
}
|
||||
|
||||
// Add function code as environment variable for dynamic languages
|
||||
env = append(env, fmt.Sprintf("FUNCTION_CODE=%s", function.Code))
|
||||
env = append(env, fmt.Sprintf("FUNCTION_HANDLER=%s", function.Handler))
|
||||
|
||||
// Create container config with proper command for runtime
|
||||
config := &container.Config{
|
||||
Image: function.Image,
|
||||
Env: env,
|
||||
AttachStdout: true,
|
||||
AttachStderr: true,
|
||||
}
|
||||
|
||||
// Set command based on runtime
|
||||
switch function.Runtime {
|
||||
case "nodejs", "nodejs18", "nodejs20":
|
||||
config.Cmd = []string{"sh", "-c", `
|
||||
echo "$FUNCTION_CODE" > /tmp/index.js &&
|
||||
echo "const handler = require('/tmp/index.js').handler;
|
||||
const input = process.env.FUNCTION_INPUT ? JSON.parse(process.env.FUNCTION_INPUT) : {};
|
||||
const context = { functionName: '` + function.Name + `' };
|
||||
console.log('<stdout>');
|
||||
handler(input, context).then(result => {
|
||||
console.log('</stdout>');
|
||||
console.log('<result>' + JSON.stringify(result) + '</result>');
|
||||
}).catch(err => {
|
||||
console.log('</stdout>');
|
||||
console.error('<result>{\"error\": \"' + err.message + '\"}</result>');
|
||||
process.exit(1);
|
||||
});" > /tmp/runner.js &&
|
||||
node /tmp/runner.js
|
||||
`}
|
||||
case "python", "python3", "python3.9", "python3.10", "python3.11":
|
||||
config.Cmd = []string{"sh", "-c", `
|
||||
echo "$FUNCTION_CODE" > /tmp/handler.py &&
|
||||
echo "import json, os, sys; sys.path.insert(0, '/tmp'); from handler import handler;
|
||||
input_data = json.loads(os.environ.get('FUNCTION_INPUT', '{}'));
|
||||
context = {'function_name': '` + function.Name + `'};
|
||||
print('<stdout>');
|
||||
try:
|
||||
result = handler(input_data, context);
|
||||
print('</stdout>');
|
||||
print('<result>' + json.dumps(result) + '</result>');
|
||||
except Exception as e:
|
||||
print('</stdout>');
|
||||
print('<result>{\"error\": \"' + str(e) + '\"}</result>', file=sys.stderr);
|
||||
sys.exit(1);" > /tmp/runner.py &&
|
||||
python /tmp/runner.py
|
||||
`}
|
||||
default:
|
||||
// For other runtimes, assume they handle execution themselves
|
||||
// This is for pre-built container images
|
||||
}
|
||||
|
||||
// Create host config with resource limits
|
||||
hostConfig := &container.HostConfig{
|
||||
Resources: container.Resources{
|
||||
Memory: int64(function.Memory) * 1024 * 1024, // Convert MB to bytes
|
||||
},
|
||||
}
|
||||
|
||||
// Apply timeout if set
|
||||
if function.Timeout.Duration > 0 {
|
||||
// Docker doesn't have a direct timeout, but we can set a reasonable upper limit
|
||||
// In a production system, you'd want to implement actual timeout handling
|
||||
hostConfig.Resources.NanoCPUs = 1000000000 // 1 CPU
|
||||
}
|
||||
|
||||
resp, err := s.client.ContainerCreate(ctx, config, hostConfig, nil, nil, "")
|
||||
if err != nil {
|
||||
return "", fmt.Errorf("failed to create container: %w", err)
|
||||
}
|
||||
|
||||
return resp.ID, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) getContainerLogs(ctx context.Context, containerID string) ([]string, error) {
|
||||
// Get container logs
|
||||
logs, err := s.client.ContainerLogs(ctx, containerID, container.LogsOptions{
|
||||
ShowStdout: true,
|
||||
ShowStderr: true,
|
||||
Tail: "100", // Get last 100 lines
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get container logs: %w", err)
|
||||
}
|
||||
defer logs.Close()
|
||||
|
||||
// Read the actual logs content
|
||||
logData, err := io.ReadAll(logs)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read log data: %w", err)
|
||||
}
|
||||
|
||||
// Parse Docker logs to remove binary headers
|
||||
rawOutput := parseDockerLogs(logData)
|
||||
|
||||
// Parse the XML-tagged output to extract logs
|
||||
parsedLogs, _, err := s.parseContainerOutput(rawOutput)
|
||||
if err != nil {
|
||||
s.logger.Warn("Failed to parse container output for logs", zap.Error(err))
|
||||
// Fallback to raw output split by lines
|
||||
lines := strings.Split(strings.TrimSpace(rawOutput), "\n")
|
||||
cleanLines := make([]string, 0, len(lines))
|
||||
for _, line := range lines {
|
||||
if trimmed := strings.TrimSpace(line); trimmed != "" {
|
||||
cleanLines = append(cleanLines, trimmed)
|
||||
}
|
||||
}
|
||||
return cleanLines, nil
|
||||
}
|
||||
|
||||
// If no logs were parsed from <stdout> tags, fallback to basic parsing
|
||||
if len(parsedLogs) == 0 {
|
||||
lines := strings.Split(strings.TrimSpace(rawOutput), "\n")
|
||||
for _, line := range lines {
|
||||
if trimmed := strings.TrimSpace(line); trimmed != "" && !strings.Contains(trimmed, "<result>") && !strings.Contains(trimmed, "</result>") {
|
||||
parsedLogs = append(parsedLogs, trimmed)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return parsedLogs, nil
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) getContainerOutput(ctx context.Context, containerID string) (json.RawMessage, error) {
|
||||
// Get container logs as output
|
||||
logs, err := s.client.ContainerLogs(ctx, containerID, container.LogsOptions{
|
||||
ShowStdout: true,
|
||||
ShowStderr: true,
|
||||
Tail: "100", // Get last 100 lines
|
||||
})
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to get container logs: %w", err)
|
||||
}
|
||||
defer logs.Close()
|
||||
|
||||
// Read the actual logs content
|
||||
logData, err := io.ReadAll(logs)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("failed to read log data: %w", err)
|
||||
}
|
||||
|
||||
// Parse Docker logs to remove binary headers
|
||||
rawOutput := parseDockerLogs(logData)
|
||||
|
||||
// Parse the XML-tagged output to extract the result
|
||||
_, result, err := s.parseContainerOutput(rawOutput)
|
||||
if err != nil {
|
||||
s.logger.Warn("Failed to parse container output for result", zap.Error(err))
|
||||
// Fallback to legacy parsing
|
||||
logContent := strings.TrimSpace(rawOutput)
|
||||
if json.Valid([]byte(logContent)) && logContent != "" {
|
||||
return json.RawMessage(logContent), nil
|
||||
} else {
|
||||
// Return the output wrapped in a JSON object
|
||||
fallbackResult := map[string]interface{}{
|
||||
"result": "Function executed successfully",
|
||||
"output": logContent,
|
||||
"timestamp": time.Now().UTC(),
|
||||
}
|
||||
resultJSON, _ := json.Marshal(fallbackResult)
|
||||
return json.RawMessage(resultJSON), nil
|
||||
}
|
||||
}
|
||||
|
||||
// If no result was found in XML tags, provide a default success result
|
||||
if result == nil {
|
||||
defaultResult := map[string]interface{}{
|
||||
"result": "Function executed successfully",
|
||||
"message": "No result output found",
|
||||
"timestamp": time.Now().UTC(),
|
||||
}
|
||||
resultJSON, _ := json.Marshal(defaultResult)
|
||||
return json.RawMessage(resultJSON), nil
|
||||
}
|
||||
|
||||
return result, nil
|
||||
}
|
||||
|
||||
// parseDockerLogs parses Docker log output, which multiplexes stdout/stderr
// frames behind 8-byte headers.
func parseDockerLogs(logData []byte) string {
	var cleanOutput strings.Builder

	const headerSize = 8
	for len(logData) > headerSize {
		// Docker log frame header: [STREAM_TYPE, 0, 0, 0, SIZE1, SIZE2, SIZE3, SIZE4]
		// Extract the payload size from bytes 4-7 (big endian)
		size := int(logData[4])<<24 + int(logData[5])<<16 + int(logData[6])<<8 + int(logData[7])

		if len(logData) < headerSize+size {
			// If the remaining data is less than the declared size, take what we have
			size = len(logData) - headerSize
		}

		if size > 0 {
			// Extract the actual log content
			cleanOutput.WriteString(string(logData[headerSize : headerSize+size]))
		}

		// Move to the next log frame
		logData = logData[headerSize+size:]
	}

	return cleanOutput.String()
}
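// Illustrative sketch (not part of the original file): the multiplexed frame
// format parseDockerLogs expects. A frame carrying the 5-byte stdout payload
// "hello" looks like this; the payload value is made up for the example.
//
//	frame := []byte{
//		0x01, 0x00, 0x00, 0x00, // stream type (1 = stdout) + padding
//		0x00, 0x00, 0x00, 0x05, // payload length, big endian = 5
//		'h', 'e', 'l', 'l', 'o', // payload
//	}
//	fmt.Println(parseDockerLogs(frame)) // prints "hello"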
|
||||
|
||||
// parseContainerOutput parses container output that contains <stdout> and <result> XML tags
|
||||
func (s *SimpleDockerRuntime) parseContainerOutput(rawOutput string) (logs []string, result json.RawMessage, err error) {
|
||||
// Extract stdout content (logs) - use DOTALL flag for multiline matching
|
||||
stdoutRegex := regexp.MustCompile(`(?s)<stdout>(.*?)</stdout>`)
|
||||
stdoutMatch := stdoutRegex.FindStringSubmatch(rawOutput)
|
||||
if len(stdoutMatch) > 1 {
|
||||
stdoutContent := strings.TrimSpace(stdoutMatch[1])
|
||||
if stdoutContent != "" {
|
||||
// Split stdout content into lines for logs
|
||||
lines := strings.Split(stdoutContent, "\n")
|
||||
// Clean up empty lines and trim whitespace
|
||||
cleanLogs := make([]string, 0, len(lines))
|
||||
for _, line := range lines {
|
||||
if trimmed := strings.TrimSpace(line); trimmed != "" {
|
||||
cleanLogs = append(cleanLogs, trimmed)
|
||||
}
|
||||
}
|
||||
logs = cleanLogs
|
||||
}
|
||||
}
|
||||
|
||||
// Extract result content - use DOTALL flag for multiline matching
|
||||
resultRegex := regexp.MustCompile(`(?s)<result>(.*?)</result>`)
|
||||
resultMatch := resultRegex.FindStringSubmatch(rawOutput)
|
||||
if len(resultMatch) > 1 {
|
||||
resultContent := strings.TrimSpace(resultMatch[1])
|
||||
if resultContent != "" {
|
||||
// Validate JSON
|
||||
if json.Valid([]byte(resultContent)) {
|
||||
result = json.RawMessage(resultContent)
|
||||
} else {
|
||||
// If not valid JSON, wrap it
|
||||
wrappedResult := map[string]interface{}{
|
||||
"output": resultContent,
|
||||
}
|
||||
resultJSON, _ := json.Marshal(wrappedResult)
|
||||
result = json.RawMessage(resultJSON)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
// If no result tag found, treat entire output as result (fallback for non-tagged output)
|
||||
if result == nil {
|
||||
// Remove any XML tags from the output for fallback
|
||||
cleanOutput := regexp.MustCompile(`(?s)<[^>]*>`).ReplaceAllString(rawOutput, "")
|
||||
cleanOutput = strings.TrimSpace(cleanOutput)
|
||||
|
||||
if cleanOutput != "" {
|
||||
if json.Valid([]byte(cleanOutput)) {
|
||||
result = json.RawMessage(cleanOutput)
|
||||
} else {
|
||||
// Wrap non-JSON output
|
||||
wrappedResult := map[string]interface{}{
|
||||
"output": cleanOutput,
|
||||
}
|
||||
resultJSON, _ := json.Marshal(wrappedResult)
|
||||
result = json.RawMessage(resultJSON)
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
return logs, result, nil
|
||||
}
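// Illustrative sketch (not part of the original file): the tagged output that
// parseContainerOutput consumes, as produced by the Node.js/Python runner
// scripts in createContainer above. The log line and result payload are made up
// for the example.
//
//	raw := "<stdout>\nprocessing order 42\n</stdout>\n<result>{\"ok\": true}</result>"
//	logs, result, _ := s.parseContainerOutput(raw)
//	// logs   == []string{"processing order 42"}
//	// result == json.RawMessage(`{"ok": true}`)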
|
||||
|
||||
// streamContainerLogs streams logs from a running container and sends them to a channel
|
||||
func (s *SimpleDockerRuntime) streamContainerLogs(ctx context.Context, containerID string, logChan chan<- string, doneChan <-chan struct{}) {
|
||||
defer close(logChan)
|
||||
|
||||
s.logger.Info("Starting container log streaming",
|
||||
zap.String("container_id", containerID))
|
||||
|
||||
// Get container logs with follow option
|
||||
logs, err := s.client.ContainerLogs(ctx, containerID, container.LogsOptions{
|
||||
ShowStdout: true,
|
||||
ShowStderr: true,
|
||||
Follow: true,
|
||||
Timestamps: false,
|
||||
})
|
||||
if err != nil {
|
||||
s.logger.Error("Failed to get container logs for streaming",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err))
|
||||
return
|
||||
}
|
||||
defer logs.Close()
|
||||
|
||||
s.logger.Debug("Successfully got container logs stream",
|
||||
zap.String("container_id", containerID))
|
||||
|
||||
// Create a context that cancels when doneChan receives a signal
|
||||
streamCtx, cancel := context.WithCancel(ctx)
|
||||
defer cancel()
|
||||
|
||||
// Goroutine to listen for done signal
|
||||
go func() {
|
||||
select {
|
||||
case <-doneChan:
|
||||
cancel()
|
||||
case <-streamCtx.Done():
|
||||
}
|
||||
}()
|
||||
|
||||
// Buffer for reading log data
|
||||
buf := make([]byte, 4096)
|
||||
|
||||
// Continue reading until context is cancelled or EOF
|
||||
totalLogLines := 0
|
||||
for {
|
||||
select {
|
||||
case <-streamCtx.Done():
|
||||
s.logger.Debug("Stream context cancelled, stopping log streaming",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("total_lines_streamed", totalLogLines))
|
||||
return
|
||||
default:
|
||||
n, err := logs.Read(buf)
|
||||
if n > 0 {
|
||||
s.logger.Debug("Read log data from container",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("bytes_read", n))
|
||||
|
||||
// Parse Docker logs to remove binary headers
|
||||
logData := buf[:n]
|
||||
rawOutput := parseDockerLogs(logData)
|
||||
|
||||
// Send each line to the log channel, filtering out XML tags
|
||||
lines := strings.Split(rawOutput, "\n")
|
||||
for _, line := range lines {
|
||||
trimmedLine := strings.TrimSpace(line)
|
||||
// Skip empty lines and XML tags
|
||||
if trimmedLine != "" &&
|
||||
!strings.HasPrefix(trimmedLine, "<stdout>") &&
|
||||
!strings.HasPrefix(trimmedLine, "</stdout>") &&
|
||||
!strings.HasPrefix(trimmedLine, "<result>") &&
|
||||
!strings.HasPrefix(trimmedLine, "</result>") &&
|
||||
trimmedLine != "<stdout>" &&
|
||||
trimmedLine != "</stdout>" &&
|
||||
trimmedLine != "<result>" &&
|
||||
trimmedLine != "</result>" {
|
||||
|
||||
totalLogLines++
|
||||
s.logger.Debug("Sending filtered log line to channel",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("log_line", trimmedLine),
|
||||
zap.Int("total_lines", totalLogLines))
|
||||
|
||||
select {
|
||||
case logChan <- trimmedLine:
|
||||
s.logger.Debug("Successfully sent filtered log line to channel",
|
||||
zap.String("container_id", containerID))
|
||||
case <-streamCtx.Done():
|
||||
s.logger.Debug("Stream context cancelled while sending log line",
|
||||
zap.String("container_id", containerID))
|
||||
return
|
||||
default:
|
||||
// Log buffer is full, warn but continue reading to avoid blocking
|
||||
s.logger.Warn("Log buffer full, dropping log line",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("dropped_line", trimmedLine))
|
||||
}
|
||||
} else if trimmedLine != "" {
|
||||
s.logger.Debug("Filtered out XML tag",
|
||||
zap.String("container_id", containerID),
|
||||
zap.String("filtered_line", trimmedLine))
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
if err != nil {
|
||||
if err == io.EOF {
|
||||
s.logger.Debug("Got EOF from container logs, container might still be running",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Int("total_lines_streamed", totalLogLines))
|
||||
// Container might still be running, continue reading
|
||||
time.Sleep(100 * time.Millisecond)
|
||||
continue
|
||||
} else {
|
||||
s.logger.Error("Error reading container logs",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err),
|
||||
zap.Int("total_lines_streamed", totalLogLines))
|
||||
return
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
func (s *SimpleDockerRuntime) cleanupContainer(ctx context.Context, containerID string) {
|
||||
// Remove container
|
||||
if err := s.client.ContainerRemove(ctx, containerID, container.RemoveOptions{
|
||||
Force: true,
|
||||
}); err != nil {
|
||||
s.logger.Warn("Failed to remove container",
|
||||
zap.String("container_id", containerID),
|
||||
zap.Error(err))
|
||||
}
|
||||
}
|
||||
@ -1,68 +0,0 @@
|
||||
package runtime

import (
	"context"
	"encoding/json"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/google/uuid"
)

// LogStreamCallback is a function that can be called to stream logs during execution
type LogStreamCallback func(logs []string) error

// RuntimeBackend provides function execution capabilities
type RuntimeBackend interface {
	// Execute runs a function with given input
	Execute(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage) (*domain.ExecutionResult, error)

	// ExecuteWithLogStreaming runs a function with given input and streams logs during execution
	ExecuteWithLogStreaming(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage, logCallback LogStreamCallback) (*domain.ExecutionResult, error)

	// Deploy prepares function for execution
	Deploy(ctx context.Context, function *domain.FunctionDefinition) error

	// Remove cleans up function resources
	Remove(ctx context.Context, functionID uuid.UUID) error

	// GetLogs retrieves execution logs
	GetLogs(ctx context.Context, executionID uuid.UUID) ([]string, error)

	// HealthCheck verifies runtime availability
	HealthCheck(ctx context.Context) error

	// GetInfo returns runtime information
	GetInfo(ctx context.Context) (*RuntimeInfo, error)

	// ListContainers returns active containers for functions
	ListContainers(ctx context.Context) ([]ContainerInfo, error)

	// StopExecution stops a running execution
	StopExecution(ctx context.Context, executionID uuid.UUID) error
}

// RuntimeInfo contains runtime backend information
type RuntimeInfo struct {
	Type      string            `json:"type"`
	Version   string            `json:"version"`
	Available bool              `json:"available"`
	Endpoint  string            `json:"endpoint,omitempty"`
	Metadata  map[string]string `json:"metadata,omitempty"`
}

// ContainerInfo contains information about a running container
type ContainerInfo struct {
	ID         string            `json:"id"`
	FunctionID uuid.UUID         `json:"function_id"`
	Status     string            `json:"status"`
	Image      string            `json:"image"`
	CreatedAt  string            `json:"created_at"`
	Labels     map[string]string `json:"labels,omitempty"`
}

// RuntimeFactory creates runtime backends
type RuntimeFactory interface {
	CreateRuntime(ctx context.Context, runtimeType string, config map[string]interface{}) (RuntimeBackend, error)
	GetSupportedRuntimes() []string
	GetDefaultConfig(runtimeType string) map[string]interface{}
}
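// Illustrative sketch (not part of the original file): a caller-side helper that
// picks a RuntimeBackend by name. It assumes the docker package's
// NewSimpleDockerRuntime constructor shown earlier in this diff; the helper
// itself is hypothetical, not existing code.
//
//	func pickBackend(name string, logger *zap.Logger) (runtime.RuntimeBackend, error) {
//		switch name {
//		case "docker", "podman":
//			return docker.NewSimpleDockerRuntime(logger)
//		default:
//			return nil, fmt.Errorf("unsupported runtime backend: %s", name)
//		}
//	}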
|
||||
@ -1,75 +0,0 @@
|
||||
package services

import (
	"context"
	"fmt"

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
)

type authService struct {
	logger *zap.Logger
}

func NewAuthService(logger *zap.Logger) AuthService {
	return &authService{
		logger: logger,
	}
}

// Mock implementation for now - this should integrate with the KMS auth system
func (s *authService) GetAuthContext(ctx context.Context) (*domain.AuthContext, error) {
	// For now, return a mock auth context
	// In a real implementation, this would extract auth info from the request context
	// that was set by middleware that validates tokens with the KMS service

	return &domain.AuthContext{
		UserID:      "admin@example.com",
		AppID:       "faas-service",
		Permissions: []string{"faas.read", "faas.write", "faas.execute", "faas.deploy", "faas.delete"},
		Claims: map[string]string{
			"user_type": "admin",
		},
	}, nil
}

func (s *authService) HasPermission(ctx context.Context, permission string) bool {
	authCtx, err := s.GetAuthContext(ctx)
	if err != nil {
		s.logger.Warn("Failed to get auth context for permission check", zap.Error(err))
		return false
	}

	// Check for exact permission match
	for _, perm := range authCtx.Permissions {
		if perm == permission {
			return true
		}

		// Check for wildcard permissions (e.g., "faas.*" grants all faas permissions)
		if len(perm) > 2 && perm[len(perm)-1] == '*' {
			prefix := perm[:len(perm)-1]
			if len(permission) >= len(prefix) && permission[:len(prefix)] == prefix {
				return true
			}
		}
	}

	s.logger.Debug("Permission denied",
		zap.String("user_id", authCtx.UserID),
		zap.String("permission", permission),
		zap.Strings("user_permissions", authCtx.Permissions))

	return false
}

func (s *authService) ValidatePermissions(ctx context.Context, permissions []string) error {
	for _, permission := range permissions {
		if !s.HasPermission(ctx, permission) {
			return fmt.Errorf("insufficient permission: %s", permission)
		}
	}
	return nil
}
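// Illustrative sketch (not part of the original file): how HasPermission behaves
// against the mock auth context above. The ctx value and the "faas.*" remark are
// assumptions for the example.
//
//	svc := NewAuthService(zap.NewNop())
//	svc.HasPermission(ctx, "faas.execute") // true  - exact match in the mock permissions
//	svc.HasPermission(ctx, "faas.admin")   // false - not granted and no wildcard present
//
// If the mock context instead granted "faas.*", the prefix check in the loop
// above would also allow "faas.admin".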
|
||||
@ -1,457 +0,0 @@
|
||||
package services
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"strings"
|
||||
"time"
|
||||
|
||||
"go.uber.org/zap"
|
||||
|
||||
"github.com/RyanCopley/skybridge/faas/internal/domain"
|
||||
"github.com/RyanCopley/skybridge/faas/internal/repository"
|
||||
"github.com/RyanCopley/skybridge/faas/internal/runtime"
|
||||
"github.com/google/uuid"
|
||||
)
|
||||
|
||||
type executionService struct {
|
||||
executionRepo repository.ExecutionRepository
|
||||
functionRepo repository.FunctionRepository
|
||||
runtimeService RuntimeService
|
||||
logger *zap.Logger
|
||||
}
|
||||
|
||||
func NewExecutionService(
|
||||
executionRepo repository.ExecutionRepository,
|
||||
functionRepo repository.FunctionRepository,
|
||||
runtimeService RuntimeService,
|
||||
logger *zap.Logger,
|
||||
) ExecutionService {
|
||||
return &executionService{
|
||||
executionRepo: executionRepo,
|
||||
functionRepo: functionRepo,
|
||||
runtimeService: runtimeService,
|
||||
logger: logger,
|
||||
}
|
||||
}
|
||||
|
||||
func (s *executionService) Execute(ctx context.Context, req *domain.ExecuteFunctionRequest, userID string) (*domain.ExecuteFunctionResponse, error) {
|
||||
// Get function definition
|
||||
function, err := s.functionRepo.GetByID(ctx, req.FunctionID)
|
||||
if err != nil {
|
||||
return nil, fmt.Errorf("function not found: %w", err)
|
||||
}
|
||||
|
||||
// Create execution record
|
||||
// Initialize input with empty JSON if nil or empty
|
||||
input := req.Input
|
||||
if input == nil || len(input) == 0 {
|
||||
input = json.RawMessage(`{}`)
|
||||
}
|
||||
|
||||
execution := &domain.FunctionExecution{
|
||||
ID: uuid.New(),
|
||||
FunctionID: req.FunctionID,
|
||||
Status: domain.StatusPending,
|
||||
Input: input,
|
||||
Output: json.RawMessage(`{}`), // Initialize with empty JSON object
|
||||
ExecutorID: userID,
|
||||
CreatedAt: time.Now(),
|
||||
}
|
||||
|
||||
// Store execution
|
||||
createdExecution, err := s.executionRepo.Create(ctx, execution)
|
||||
if err != nil {
|
||||
s.logger.Error("Failed to create execution record",
|
||||
zap.String("function_id", req.FunctionID.String()),
|
||||
zap.Error(err))
|
||||
return nil, fmt.Errorf("failed to create execution record: %w", err)
|
||||
}
|
||||
|
||||
if req.Async {
|
||||
// Start async execution
|
||||
go s.executeAsync(context.Background(), createdExecution, function)
|
||||
|
||||
return &domain.ExecuteFunctionResponse{
|
||||
ExecutionID: createdExecution.ID,
|
||||
Status: domain.StatusPending,
|
||||
}, nil
|
||||
} else {
|
||||
// Execute synchronously
|
||||
return s.executeSync(ctx, createdExecution, function)
|
||||
}
|
||||
}
|
||||
|
||||
func (s *executionService) executeSync(ctx context.Context, execution *domain.FunctionExecution, function *domain.FunctionDefinition) (*domain.ExecuteFunctionResponse, error) {
	// Update status to running
	execution.Status = domain.StatusRunning
	startedAt := time.Now()
	execution.StartedAt = &startedAt

	if _, err := s.executionRepo.Update(ctx, execution.ID, execution); err != nil {
		s.logger.Warn("Failed to update execution status to running", zap.Error(err))
	}
|
||||
|
||||
// Get runtime backend
|
||||
backend, err := s.runtimeService.GetBackend(ctx, string(function.Runtime))
|
||||
if err != nil {
|
||||
execution.Status = domain.StatusFailed
|
||||
execution.Error = fmt.Sprintf("failed to get runtime backend: %v", err)
|
||||
s.updateExecutionComplete(ctx, execution)
|
||||
return &domain.ExecuteFunctionResponse{
|
||||
ExecutionID: execution.ID,
|
||||
			Status:      domain.StatusFailed,
			Error:       execution.Error,
		}, nil
	}

	// Create timeout context for execution
	execCtx := ctx
	var cancel context.CancelFunc
	if function.Timeout.Duration > 0 {
		execCtx, cancel = context.WithTimeout(ctx, function.Timeout.Duration)
		defer cancel()
	}

	// Define log streaming callback
	logCallback := func(logs []string) error {
		s.logger.Info("Log streaming callback called",
			zap.String("execution_id", execution.ID.String()),
			zap.Int("log_count", len(logs)),
			zap.Strings("logs_preview", logs))

		// Update execution with current logs using background context
		// to ensure updates continue even after HTTP request completes
		// Create a copy of the execution to avoid race conditions
		execCopy := *execution
		execCopy.Logs = logs
		_, err := s.executionRepo.Update(context.Background(), execution.ID, &execCopy)
		if err == nil {
			// Only update the original if database update succeeds
			execution.Logs = logs
			s.logger.Info("Successfully updated execution with logs in database",
				zap.String("execution_id", execution.ID.String()),
				zap.Int("log_count", len(logs)))
		} else {
			s.logger.Error("Failed to update execution with logs in database",
				zap.String("execution_id", execution.ID.String()),
				zap.Error(err))
		}
		return err
	}

	// Check if backend supports log streaming
	type logStreamingBackend interface {
		ExecuteWithLogStreaming(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage, logCallback runtime.LogStreamCallback) (*domain.ExecutionResult, error)
	}

	var result *domain.ExecutionResult
	if lsBackend, ok := backend.(logStreamingBackend); ok {
		s.logger.Info("Backend supports log streaming, using ExecuteWithLogStreaming",
			zap.String("execution_id", execution.ID.String()),
			zap.String("function_id", function.ID.String()))
		// Execute function with log streaming
		result, err = lsBackend.ExecuteWithLogStreaming(execCtx, function, execution.Input, logCallback)
	} else {
		s.logger.Info("Backend does not support log streaming, using regular Execute",
			zap.String("execution_id", execution.ID.String()),
			zap.String("function_id", function.ID.String()))
		// Fallback to regular execute
		result, err = backend.Execute(execCtx, function, execution.Input)
	}

	if err != nil {
		// Check if this was a timeout error
		if execCtx.Err() == context.DeadlineExceeded {
			execution.Status = domain.StatusTimeout
			execution.Error = fmt.Sprintf("function execution timed out after %v", function.Timeout.Duration)
		} else {
			execution.Status = domain.StatusFailed
			execution.Error = fmt.Sprintf("execution failed: %v", err)
		}
		s.updateExecutionComplete(ctx, execution)
		return &domain.ExecuteFunctionResponse{
			ExecutionID: execution.ID,
			Status:      execution.Status,
			Error:       execution.Error,
		}, nil
	}

	// Update execution with results
	execution.Status = domain.StatusCompleted
	// Handle empty output
	if len(result.Output) == 0 {
		execution.Output = json.RawMessage(`{}`)
	} else {
		execution.Output = result.Output
	}
	execution.Error = result.Error
	execution.Duration = result.Duration
	execution.MemoryUsed = result.MemoryUsed
	execution.Logs = result.Logs

	// Check if the result indicates a timeout
	if result.Error != "" {
		if strings.Contains(result.Error, "timed out") {
			execution.Status = domain.StatusTimeout
		} else {
			execution.Status = domain.StatusFailed
		}
	}

	s.updateExecutionComplete(ctx, execution)

	return &domain.ExecuteFunctionResponse{
		ExecutionID: execution.ID,
		Status:      execution.Status,
		Output:      execution.Output,
		Error:       execution.Error,
		Duration:    execution.Duration,
		MemoryUsed:  execution.MemoryUsed,
	}, nil
}

func (s *executionService) executeAsync(ctx context.Context, execution *domain.FunctionExecution, function *domain.FunctionDefinition) {
	// Update status to running
	execution.Status = domain.StatusRunning
	execution.StartedAt = &[]time.Time{time.Now()}[0]

	if _, err := s.executionRepo.Update(ctx, execution.ID, execution); err != nil {
		s.logger.Warn("Failed to update execution status to running", zap.Error(err))
	}

	// Get runtime backend
	backend, err := s.runtimeService.GetBackend(ctx, string(function.Runtime))
	if err != nil {
		s.logger.Error("Failed to get runtime backend for async execution",
			zap.String("execution_id", execution.ID.String()),
			zap.Error(err))
		execution.Status = domain.StatusFailed
		execution.Error = fmt.Sprintf("failed to get runtime backend: %v", err)
		s.updateExecutionComplete(ctx, execution)
		return
	}

	// Create timeout context for execution
	execCtx := ctx
	var cancel context.CancelFunc
	if function.Timeout.Duration > 0 {
		execCtx, cancel = context.WithTimeout(ctx, function.Timeout.Duration)
		defer cancel()
	}

	// Define log streaming callback
	logCallback := func(logs []string) error {
		s.logger.Info("Log streaming callback called",
			zap.String("execution_id", execution.ID.String()),
			zap.Int("log_count", len(logs)),
			zap.Strings("logs_preview", logs))

		// Update execution with current logs using background context
		// to ensure updates continue even after HTTP request completes
		// Create a copy of the execution to avoid race conditions
		execCopy := *execution
		execCopy.Logs = logs
		_, err := s.executionRepo.Update(context.Background(), execution.ID, &execCopy)
		if err == nil {
			// Only update the original if database update succeeds
			execution.Logs = logs
			s.logger.Info("Successfully updated execution with logs in database",
				zap.String("execution_id", execution.ID.String()),
				zap.Int("log_count", len(logs)))
		} else {
			s.logger.Error("Failed to update execution with logs in database",
				zap.String("execution_id", execution.ID.String()),
				zap.Error(err))
		}
		return err
	}

	// Check if backend supports log streaming
	type logStreamingBackend interface {
		ExecuteWithLogStreaming(ctx context.Context, function *domain.FunctionDefinition, input json.RawMessage, logCallback runtime.LogStreamCallback) (*domain.ExecutionResult, error)
	}

	var result *domain.ExecutionResult
	if lsBackend, ok := backend.(logStreamingBackend); ok {
		s.logger.Info("Backend supports log streaming, using ExecuteWithLogStreaming",
			zap.String("execution_id", execution.ID.String()),
			zap.String("function_id", function.ID.String()))
		// Execute function with log streaming
		result, err = lsBackend.ExecuteWithLogStreaming(execCtx, function, execution.Input, logCallback)
	} else {
		s.logger.Info("Backend does not support log streaming, using regular Execute",
			zap.String("execution_id", execution.ID.String()),
			zap.String("function_id", function.ID.String()))
		// Fallback to regular execute
		result, err = backend.Execute(execCtx, function, execution.Input)
	}

	if err != nil {
		// Check if this was a timeout error
		if execCtx.Err() == context.DeadlineExceeded {
			execution.Status = domain.StatusTimeout
			execution.Error = fmt.Sprintf("function execution timed out after %v", function.Timeout.Duration)
		} else {
			execution.Status = domain.StatusFailed
			execution.Error = fmt.Sprintf("execution failed: %v", err)
		}
		s.updateExecutionComplete(ctx, execution)
		return
	}

	// Update execution with results
	execution.Status = domain.StatusCompleted
	// Handle empty output
	if len(result.Output) == 0 {
		execution.Output = json.RawMessage(`{}`)
	} else {
		execution.Output = result.Output
	}
	execution.Error = result.Error
	execution.Duration = result.Duration
	execution.MemoryUsed = result.MemoryUsed
	execution.Logs = result.Logs

	// Check if the result indicates a timeout
	if result.Error != "" {
		if strings.Contains(result.Error, "timed out") {
			execution.Status = domain.StatusTimeout
		} else {
			execution.Status = domain.StatusFailed
		}
	}

	s.updateExecutionComplete(ctx, execution)

	s.logger.Info("Async function execution completed",
		zap.String("execution_id", execution.ID.String()),
		zap.String("status", string(execution.Status)),
		zap.Duration("duration", execution.Duration))
}

func (s *executionService) updateExecutionComplete(ctx context.Context, execution *domain.FunctionExecution) {
	execution.CompletedAt = &[]time.Time{time.Now()}[0]

	if _, err := s.executionRepo.Update(ctx, execution.ID, execution); err != nil {
		s.logger.Error("Failed to update execution completion",
			zap.String("execution_id", execution.ID.String()),
			zap.Error(err))
	}
}

func (s *executionService) GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionExecution, error) {
	execution, err := s.executionRepo.GetByID(ctx, id)
	if err != nil {
		return nil, fmt.Errorf("execution not found: %w", err)
	}
	return execution, nil
}

func (s *executionService) List(ctx context.Context, functionID *uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error) {
	if limit <= 0 {
		limit = 50 // Default limit
	}
	if limit > 100 {
		limit = 100 // Max limit
	}

	return s.executionRepo.List(ctx, functionID, limit, offset)
}

func (s *executionService) GetByFunctionID(ctx context.Context, functionID uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error) {
	if limit <= 0 {
		limit = 50 // Default limit
	}
	if limit > 100 {
		limit = 100 // Max limit
	}

	return s.executionRepo.GetByFunctionID(ctx, functionID, limit, offset)
}

func (s *executionService) Cancel(ctx context.Context, id uuid.UUID, userID string) error {
	s.logger.Info("Canceling execution",
		zap.String("execution_id", id.String()),
		zap.String("user_id", userID))

	// Get execution
	execution, err := s.executionRepo.GetByID(ctx, id)
	if err != nil {
		return fmt.Errorf("execution not found: %w", err)
	}

	// Check if execution is still running
	if execution.Status != domain.StatusRunning && execution.Status != domain.StatusPending {
		return fmt.Errorf("execution is not running (status: %s)", execution.Status)
	}

	// Get function to determine runtime
	function, err := s.functionRepo.GetByID(ctx, execution.FunctionID)
	if err != nil {
		return fmt.Errorf("function not found: %w", err)
	}

	// Stop execution in runtime
	backend, err := s.runtimeService.GetBackend(ctx, string(function.Runtime))
	if err != nil {
		return fmt.Errorf("failed to get runtime backend: %w", err)
	}

	if err := backend.StopExecution(ctx, id); err != nil {
		s.logger.Warn("Failed to stop execution in runtime",
			zap.String("execution_id", id.String()),
			zap.Error(err))
	}

	// Update execution status
	execution.Status = domain.StatusCanceled
	execution.Error = "execution canceled by user"
	execution.CompletedAt = &[]time.Time{time.Now()}[0]

	if _, err := s.executionRepo.Update(ctx, execution.ID, execution); err != nil {
		return fmt.Errorf("failed to update execution status: %w", err)
	}

	s.logger.Info("Execution canceled successfully",
		zap.String("execution_id", id.String()))

	return nil
}

func (s *executionService) GetLogs(ctx context.Context, id uuid.UUID) ([]string, error) {
	s.logger.Debug("GetLogs called in execution service",
		zap.String("execution_id", id.String()))

	// Get execution with logs from database
	execution, err := s.executionRepo.GetByID(ctx, id)
	if err != nil {
		s.logger.Error("Failed to get execution from database in GetLogs",
			zap.String("execution_id", id.String()),
			zap.Error(err))
		return nil, fmt.Errorf("execution not found: %w", err)
	}

	s.logger.Info("Retrieved execution from database",
		zap.String("execution_id", id.String()),
		zap.String("status", string(execution.Status)),
		zap.Int("log_count", len(execution.Logs)),
		zap.Bool("logs_nil", execution.Logs == nil))

	// Return logs from execution record
	if execution.Logs == nil {
		s.logger.Debug("Execution has nil logs, returning empty slice",
			zap.String("execution_id", id.String()))
		return []string{}, nil
	}

	s.logger.Debug("Returning logs from execution",
		zap.String("execution_id", id.String()),
		zap.Int("log_count", len(execution.Logs)))

	return execution.Logs, nil
}

func (s *executionService) GetRunningExecutions(ctx context.Context) ([]*domain.FunctionExecution, error) {
	return s.executionRepo.GetRunningExecutions(ctx)
}
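The local `logStreamingBackend` interface above is satisfied by any runtime backend that exposes `ExecuteWithLogStreaming`. The sketch below illustrates that contract; the `echoBackend` type, its package placement, and the exact `domain.ExecutionResult` field types are assumptions for illustration, only the method signature and callback usage come from the code above.

```go
package sketch // hypothetical package; assumes it lives inside the faas module

import (
	"context"
	"encoding/json"
	"time"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/runtime"
)

// echoBackend is an illustrative backend that flushes its log buffer through
// the callback while "executing"; real backends would stream container output.
type echoBackend struct{}

func (b *echoBackend) ExecuteWithLogStreaming(
	ctx context.Context,
	function *domain.FunctionDefinition,
	input json.RawMessage,
	logCallback runtime.LogStreamCallback,
) (*domain.ExecutionResult, error) {
	start := time.Now()
	logs := []string{"starting " + function.Name}

	// Each call hands the full buffer to the execution service, which
	// persists it (see the logCallback defined in the service above).
	if err := logCallback(logs); err != nil {
		return nil, err
	}

	// ... run the workload here, appending to logs and calling logCallback
	// as new output arrives ...
	logs = append(logs, "finished")
	if err := logCallback(logs); err != nil {
		return nil, err
	}

	// Field types (Duration as time.Duration, Output as json.RawMessage)
	// are inferred from how the execution service reads the result.
	return &domain.ExecutionResult{
		Output:   json.RawMessage(`{"echo": true}`),
		Logs:     logs,
		Duration: time.Since(start),
	}, nil
}
```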
@ -1,253 +0,0 @@
package services

import (
	"context"
	"fmt"
	"time"

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/repository"
	"github.com/google/uuid"
)

type functionService struct {
	functionRepo   repository.FunctionRepository
	runtimeService RuntimeService
	logger         *zap.Logger
}

func NewFunctionService(functionRepo repository.FunctionRepository, runtimeService RuntimeService, logger *zap.Logger) FunctionService {
	return &functionService{
		functionRepo:   functionRepo,
		runtimeService: runtimeService,
		logger:         logger,
	}
}

func (s *functionService) Create(ctx context.Context, req *domain.CreateFunctionRequest, userID string) (*domain.FunctionDefinition, error) {
	s.logger.Info("Creating new function",
		zap.String("name", req.Name),
		zap.String("app_id", req.AppID),
		zap.String("user_id", userID))

	// Check if function with same name exists
	_, err := s.functionRepo.GetByName(ctx, req.AppID, req.Name)
	if err == nil {
		return nil, fmt.Errorf("function with name '%s' already exists in app '%s'", req.Name, req.AppID)
	}

	// Validate runtime
	if !s.isValidRuntime(string(req.Runtime)) {
		return nil, fmt.Errorf("unsupported runtime: %s", req.Runtime)
	}

	// Create function definition
	function := &domain.FunctionDefinition{
		ID:          uuid.New(),
		Name:        req.Name,
		AppID:       req.AppID,
		Runtime:     req.Runtime,
		Image:       req.Image,
		Handler:     req.Handler,
		Code:        req.Code,
		Environment: req.Environment,
		Timeout:     req.Timeout,
		Memory:      req.Memory,
		Owner:       req.Owner,
		CreatedAt:   time.Now(),
		UpdatedAt:   time.Now(),
	}

	// Validate timeout and memory limits
	if function.Timeout.Duration < time.Second {
		return nil, fmt.Errorf("timeout must be at least 1 second")
	}
	if function.Timeout.Duration > 15*time.Minute {
		return nil, fmt.Errorf("timeout cannot exceed 15 minutes")
	}
	if function.Memory < 64 || function.Memory > 3008 {
		return nil, fmt.Errorf("memory must be between 64 and 3008 MB")
	}

	// Store function
	created, err := s.functionRepo.Create(ctx, function)
	if err != nil {
		s.logger.Error("Failed to create function",
			zap.String("name", req.Name),
			zap.Error(err))
		return nil, fmt.Errorf("failed to create function: %w", err)
	}

	s.logger.Info("Function created successfully",
		zap.String("function_id", created.ID.String()),
		zap.String("name", created.Name))

	return created, nil
}

func (s *functionService) GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionDefinition, error) {
	function, err := s.functionRepo.GetByID(ctx, id)
	if err != nil {
		return nil, fmt.Errorf("function not found: %w", err)
	}
	return function, nil
}

func (s *functionService) GetByName(ctx context.Context, appID, name string) (*domain.FunctionDefinition, error) {
	function, err := s.functionRepo.GetByName(ctx, appID, name)
	if err != nil {
		return nil, fmt.Errorf("function not found: %w", err)
	}
	return function, nil
}

func (s *functionService) Update(ctx context.Context, id uuid.UUID, req *domain.UpdateFunctionRequest, userID string) (*domain.FunctionDefinition, error) {
	s.logger.Info("Updating function",
		zap.String("function_id", id.String()),
		zap.String("user_id", userID))

	// Get existing function
	_, err := s.functionRepo.GetByID(ctx, id)
	if err != nil {
		return nil, fmt.Errorf("function not found: %w", err)
	}

	// Validate runtime if being updated
	if req.Runtime != nil && !s.isValidRuntime(string(*req.Runtime)) {
		return nil, fmt.Errorf("unsupported runtime: %s", *req.Runtime)
	}

	// Validate timeout and memory if being updated
	if req.Timeout != nil {
		if req.Timeout.Duration < time.Second {
			return nil, fmt.Errorf("timeout must be at least 1 second")
		}
		if req.Timeout.Duration > 15*time.Minute {
			return nil, fmt.Errorf("timeout cannot exceed 15 minutes")
		}
	}
	if req.Memory != nil && (*req.Memory < 64 || *req.Memory > 3008) {
		return nil, fmt.Errorf("memory must be between 64 and 3008 MB")
	}

	// Update function
	updated, err := s.functionRepo.Update(ctx, id, req)
	if err != nil {
		s.logger.Error("Failed to update function",
			zap.String("function_id", id.String()),
			zap.Error(err))
		return nil, fmt.Errorf("failed to update function: %w", err)
	}

	s.logger.Info("Function updated successfully",
		zap.String("function_id", id.String()))

	return updated, nil
}

func (s *functionService) Delete(ctx context.Context, id uuid.UUID, userID string) error {
	s.logger.Info("Deleting function",
		zap.String("function_id", id.String()),
		zap.String("user_id", userID))

	// Get function to determine runtime
	function, err := s.functionRepo.GetByID(ctx, id)
	if err != nil {
		return fmt.Errorf("function not found: %w", err)
	}

	// Clean up runtime resources
	backend, err := s.runtimeService.GetBackend(ctx, string(function.Runtime))
	if err != nil {
		s.logger.Warn("Failed to get runtime backend for cleanup", zap.Error(err))
	} else {
		if err := backend.Remove(ctx, id); err != nil {
			s.logger.Warn("Failed to remove runtime resources",
				zap.String("function_id", id.String()),
				zap.Error(err))
		}
	}

	// Delete function
	if err := s.functionRepo.Delete(ctx, id); err != nil {
		s.logger.Error("Failed to delete function",
			zap.String("function_id", id.String()),
			zap.Error(err))
		return fmt.Errorf("failed to delete function: %w", err)
	}

	s.logger.Info("Function deleted successfully",
		zap.String("function_id", id.String()))

	return nil
}

func (s *functionService) List(ctx context.Context, appID string, limit, offset int) ([]*domain.FunctionDefinition, error) {
	if limit <= 0 {
		limit = 50 // Default limit
	}
	if limit > 100 {
		limit = 100 // Max limit
	}

	return s.functionRepo.List(ctx, appID, limit, offset)
}

func (s *functionService) GetByAppID(ctx context.Context, appID string) ([]*domain.FunctionDefinition, error) {
	return s.functionRepo.GetByAppID(ctx, appID)
}

func (s *functionService) Deploy(ctx context.Context, id uuid.UUID, req *domain.DeployFunctionRequest, userID string) (*domain.DeployFunctionResponse, error) {
	s.logger.Info("Deploying function",
		zap.String("function_id", id.String()),
		zap.String("user_id", userID),
		zap.Bool("force", req.Force))

	// Get function
	function, err := s.functionRepo.GetByID(ctx, id)
	if err != nil {
		return nil, fmt.Errorf("function not found: %w", err)
	}

	// Get runtime backend
	backend, err := s.runtimeService.GetBackend(ctx, string(function.Runtime))
	if err != nil {
		return nil, fmt.Errorf("failed to get runtime backend: %w", err)
	}

	// Deploy function
	if err := backend.Deploy(ctx, function); err != nil {
		s.logger.Error("Failed to deploy function",
			zap.String("function_id", id.String()),
			zap.Error(err))
		return nil, fmt.Errorf("failed to deploy function: %w", err)
	}

	s.logger.Info("Function deployed successfully",
		zap.String("function_id", id.String()),
		zap.String("image", function.Image))

	return &domain.DeployFunctionResponse{
		Status:  "deployed",
		Message: "Function deployed successfully",
		Image:   function.Image,
	}, nil
}

func (s *functionService) isValidRuntime(runtimeType string) bool {
	validRuntimes := []string{
		string(domain.RuntimeNodeJS18),
		string(domain.RuntimePython39),
		string(domain.RuntimeGo120),
		string(domain.RuntimeCustom),
	}

	for _, valid := range validRuntimes {
		if runtimeType == valid {
			return true
		}
	}
	return false
}
@ -1,48 +0,0 @@
package services

import (
	"context"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/runtime"
	"github.com/google/uuid"
)

// FunctionService provides business logic for function management
type FunctionService interface {
	Create(ctx context.Context, req *domain.CreateFunctionRequest, userID string) (*domain.FunctionDefinition, error)
	GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionDefinition, error)
	GetByName(ctx context.Context, appID, name string) (*domain.FunctionDefinition, error)
	Update(ctx context.Context, id uuid.UUID, req *domain.UpdateFunctionRequest, userID string) (*domain.FunctionDefinition, error)
	Delete(ctx context.Context, id uuid.UUID, userID string) error
	List(ctx context.Context, appID string, limit, offset int) ([]*domain.FunctionDefinition, error)
	GetByAppID(ctx context.Context, appID string) ([]*domain.FunctionDefinition, error)
	Deploy(ctx context.Context, id uuid.UUID, req *domain.DeployFunctionRequest, userID string) (*domain.DeployFunctionResponse, error)
}

// ExecutionService provides business logic for function execution
type ExecutionService interface {
	Execute(ctx context.Context, req *domain.ExecuteFunctionRequest, userID string) (*domain.ExecuteFunctionResponse, error)
	GetByID(ctx context.Context, id uuid.UUID) (*domain.FunctionExecution, error)
	List(ctx context.Context, functionID *uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error)
	GetByFunctionID(ctx context.Context, functionID uuid.UUID, limit, offset int) ([]*domain.FunctionExecution, error)
	Cancel(ctx context.Context, id uuid.UUID, userID string) error
	GetLogs(ctx context.Context, id uuid.UUID) ([]string, error)
	GetRunningExecutions(ctx context.Context) ([]*domain.FunctionExecution, error)
}

// RuntimeService provides runtime management capabilities
type RuntimeService interface {
	GetBackend(ctx context.Context, runtimeType string) (runtime.RuntimeBackend, error)
	ListSupportedRuntimes(ctx context.Context) ([]*domain.RuntimeInfo, error)
	HealthCheck(ctx context.Context, runtimeType string) error
	GetRuntimeInfo(ctx context.Context, runtimeType string) (*runtime.RuntimeInfo, error)
	ListContainers(ctx context.Context, runtimeType string) ([]runtime.ContainerInfo, error)
}

// AuthService provides authentication and authorization
type AuthService interface {
	GetAuthContext(ctx context.Context) (*domain.AuthContext, error)
	HasPermission(ctx context.Context, permission string) bool
	ValidatePermissions(ctx context.Context, permissions []string) error
}
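For orientation, a caller (for example an HTTP handler) consumes these interfaces rather than the concrete services. A minimal sketch, assuming only the `ExecutionService` interface above; the `printExecutionLogs` helper is illustrative and not part of the diff:

```go
package services

import (
	"context"
	"fmt"

	"github.com/google/uuid"
)

// printExecutionLogs is an illustrative helper: it fetches an execution's
// persisted logs through the ExecutionService interface defined above.
func printExecutionLogs(ctx context.Context, execSvc ExecutionService, id uuid.UUID) error {
	logs, err := execSvc.GetLogs(ctx, id)
	if err != nil {
		return fmt.Errorf("fetching logs for %s: %w", id, err)
	}
	for _, line := range logs {
		fmt.Println(line)
	}
	return nil
}
```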
@ -1,198 +0,0 @@
package services

import (
	"context"
	"fmt"
	"sync"

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/runtime"
	"github.com/RyanCopley/skybridge/faas/internal/runtime/docker"
)

type runtimeService struct {
	backends map[string]runtime.RuntimeBackend
	mutex    sync.RWMutex
	logger   *zap.Logger
	config   *RuntimeConfig
}

type RuntimeConfig struct {
	DefaultRuntime string                            `json:"default_runtime"`
	Backends       map[string]map[string]interface{} `json:"backends"`
}

func NewRuntimeService(logger *zap.Logger, config *RuntimeConfig) RuntimeService {
	if config == nil {
		config = &RuntimeConfig{
			DefaultRuntime: "docker",
			Backends:       make(map[string]map[string]interface{}),
		}
	}

	service := &runtimeService{
		backends: make(map[string]runtime.RuntimeBackend),
		logger:   logger,
		config:   config,
	}

	// Initialize default Docker backend
	if err := service.initializeDockerBackend(); err != nil {
		logger.Warn("Failed to initialize Docker backend", zap.Error(err))
	}

	return service
}

func (s *runtimeService) initializeDockerBackend() error {
	// Use simple Docker backend for now
	dockerBackend, err := docker.NewSimpleDockerRuntime(s.logger)
	if err != nil {
		s.logger.Error("Failed to create Docker runtime", zap.Error(err))
		return err
	}

	s.mutex.Lock()
	s.backends["docker"] = dockerBackend
	s.mutex.Unlock()

	s.logger.Info("Simple Docker runtime backend initialized")
	return nil
}

func (s *runtimeService) GetBackend(ctx context.Context, runtimeType string) (runtime.RuntimeBackend, error) {
	// Map domain runtime types to backend types
	backendType := s.mapRuntimeToBackend(runtimeType)

	s.mutex.RLock()
	backend, exists := s.backends[backendType]
	s.mutex.RUnlock()

	if !exists {
		return nil, fmt.Errorf("runtime backend '%s' not available", backendType)
	}

	// Check backend health
	if err := backend.HealthCheck(ctx); err != nil {
		s.logger.Warn("Runtime backend health check failed",
			zap.String("backend", backendType),
			zap.Error(err))
		return nil, fmt.Errorf("runtime backend '%s' is not healthy: %w", backendType, err)
	}

	return backend, nil
}

func (s *runtimeService) ListSupportedRuntimes(ctx context.Context) ([]*domain.RuntimeInfo, error) {
	runtimes := []*domain.RuntimeInfo{
		{
			Type:         domain.RuntimeNodeJS18,
			Version:      "18.x",
			Available:    s.isRuntimeAvailable(ctx, "nodejs18"),
			DefaultImage: "node:18-alpine",
			Description:  "Node.js 18.x runtime with Alpine Linux",
		},
		{
			Type:         domain.RuntimePython39,
			Version:      "3.9.x",
			Available:    s.isRuntimeAvailable(ctx, "python3.9"),
			DefaultImage: "python:3.9-alpine",
			Description:  "Python 3.9.x runtime with Alpine Linux",
		},
		{
			Type:         domain.RuntimeGo120,
			Version:      "1.20.x",
			Available:    s.isRuntimeAvailable(ctx, "go1.20"),
			DefaultImage: "golang:1.20-alpine",
			Description:  "Go 1.20.x runtime with Alpine Linux",
		},
		{
			Type:         domain.RuntimeCustom,
			Version:      "custom",
			Available:    s.isRuntimeAvailable(ctx, "custom"),
			DefaultImage: "alpine:latest",
			Description:  "Custom runtime with user-defined image",
		},
	}

	return runtimes, nil
}

func (s *runtimeService) HealthCheck(ctx context.Context, runtimeType string) error {
	backendType := s.mapRuntimeToBackend(runtimeType)

	s.mutex.RLock()
	backend, exists := s.backends[backendType]
	s.mutex.RUnlock()

	if !exists {
		return fmt.Errorf("runtime backend '%s' not available", backendType)
	}

	return backend.HealthCheck(ctx)
}

func (s *runtimeService) GetRuntimeInfo(ctx context.Context, runtimeType string) (*runtime.RuntimeInfo, error) {
	backendType := s.mapRuntimeToBackend(runtimeType)

	s.mutex.RLock()
	backend, exists := s.backends[backendType]
	s.mutex.RUnlock()

	if !exists {
		return nil, fmt.Errorf("runtime backend '%s' not available", backendType)
	}

	return backend.GetInfo(ctx)
}

func (s *runtimeService) ListContainers(ctx context.Context, runtimeType string) ([]runtime.ContainerInfo, error) {
	backendType := s.mapRuntimeToBackend(runtimeType)

	s.mutex.RLock()
	backend, exists := s.backends[backendType]
	s.mutex.RUnlock()

	if !exists {
		return nil, fmt.Errorf("runtime backend '%s' not available", backendType)
	}

	return backend.ListContainers(ctx)
}

func (s *runtimeService) mapRuntimeToBackend(runtimeType string) string {
	// For now, all runtimes use Docker backend
	// In the future, we could support different backends for different runtimes
	switch runtimeType {
	case string(domain.RuntimeNodeJS18):
		return "docker"
	case string(domain.RuntimePython39):
		return "docker"
	case string(domain.RuntimeGo120):
		return "docker"
	case string(domain.RuntimeCustom):
		return "docker"
	default:
		return s.config.DefaultRuntime
	}
}

func (s *runtimeService) isRuntimeAvailable(ctx context.Context, runtimeType string) bool {
	backendType := s.mapRuntimeToBackend(runtimeType)

	s.mutex.RLock()
	backend, exists := s.backends[backendType]
	s.mutex.RUnlock()

	if !exists {
		return false
	}

	if err := backend.HealthCheck(ctx); err != nil {
		return false
	}

	return true
}
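A short, hypothetical usage sketch of the runtime service above; the `reportRuntimes` helper is illustrative only, and it assumes `domain.RuntimeInfo.Type` formats cleanly with `%s`:

```go
package services

import (
	"context"
	"fmt"

	"go.uber.org/zap"
)

// reportRuntimes shows one way a caller could use NewRuntimeService and
// ListSupportedRuntimes; it is not part of the diff above.
func reportRuntimes(ctx context.Context, logger *zap.Logger) error {
	// A nil config falls back to the default Docker backend configuration.
	svc := NewRuntimeService(logger, nil)

	runtimes, err := svc.ListSupportedRuntimes(ctx)
	if err != nil {
		return err
	}
	for _, rt := range runtimes {
		// Available reflects the per-backend health check performed above.
		fmt.Printf("%s (%s): available=%t\n", rt.Type, rt.Version, rt.Available)
	}
	return nil
}
```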
@ -1,23 +0,0 @@
-- Drop view
DROP VIEW IF EXISTS function_stats;

-- Drop triggers
DROP TRIGGER IF EXISTS update_functions_updated_at ON functions;
DROP FUNCTION IF EXISTS update_updated_at_column();

-- Drop indexes
DROP INDEX IF EXISTS idx_executions_created_at;
DROP INDEX IF EXISTS idx_executions_executor_id;
DROP INDEX IF EXISTS idx_executions_status;
DROP INDEX IF EXISTS idx_executions_function_id;

DROP INDEX IF EXISTS idx_functions_created_at;
DROP INDEX IF EXISTS idx_functions_runtime;
DROP INDEX IF EXISTS idx_functions_app_id;

-- Drop tables
DROP TABLE IF EXISTS executions;
DROP TABLE IF EXISTS functions;

-- Drop extension (only if no other tables use it)
-- DROP EXTENSION IF EXISTS "uuid-ossp";
@ -1,84 +0,0 @@
-- Create UUID extension
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";

-- Create functions table
CREATE TABLE functions (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    name VARCHAR(255) NOT NULL,
    app_id VARCHAR(255) NOT NULL,
    runtime VARCHAR(50) NOT NULL,
    image VARCHAR(500) NOT NULL,
    handler VARCHAR(255) NOT NULL,
    code TEXT,
    environment JSONB DEFAULT '{}',
    timeout INTERVAL NOT NULL DEFAULT '30 seconds',
    memory INTEGER NOT NULL DEFAULT 128,
    owner_type VARCHAR(50) NOT NULL,
    owner_name VARCHAR(255) NOT NULL,
    owner_owner VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    updated_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,

    UNIQUE(app_id, name)
);

-- Create executions table
CREATE TABLE executions (
    id UUID PRIMARY KEY DEFAULT uuid_generate_v4(),
    function_id UUID NOT NULL REFERENCES functions(id) ON DELETE CASCADE,
    status VARCHAR(50) NOT NULL DEFAULT 'pending',
    input JSONB,
    output JSONB,
    error TEXT,
    duration INTERVAL,
    memory_used INTEGER,
    container_id VARCHAR(255),
    executor_id VARCHAR(255) NOT NULL,
    created_at TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
    started_at TIMESTAMP,
    completed_at TIMESTAMP
);

-- Create indexes for better query performance
CREATE INDEX idx_functions_app_id ON functions(app_id);
CREATE INDEX idx_functions_runtime ON functions(runtime);
CREATE INDEX idx_functions_created_at ON functions(created_at);

CREATE INDEX idx_executions_function_id ON executions(function_id);
CREATE INDEX idx_executions_status ON executions(status);
CREATE INDEX idx_executions_executor_id ON executions(executor_id);
CREATE INDEX idx_executions_created_at ON executions(created_at);

-- Create trigger to update updated_at timestamp
CREATE OR REPLACE FUNCTION update_updated_at_column()
RETURNS TRIGGER AS $$
BEGIN
    NEW.updated_at = CURRENT_TIMESTAMP;
    RETURN NEW;
END;
$$ language 'plpgsql';

CREATE TRIGGER update_functions_updated_at
    BEFORE UPDATE ON functions
    FOR EACH ROW EXECUTE FUNCTION update_updated_at_column();

-- Document the tables
COMMENT ON TABLE functions IS 'Function definitions and configurations';
COMMENT ON TABLE executions IS 'Function execution records and results';

-- Create a view for function statistics
CREATE OR REPLACE VIEW function_stats AS
SELECT
    f.id,
    f.name,
    f.app_id,
    f.runtime,
    COUNT(e.id) as total_executions,
    COUNT(CASE WHEN e.status = 'completed' THEN 1 END) as successful_executions,
    COUNT(CASE WHEN e.status = 'failed' THEN 1 END) as failed_executions,
    COUNT(CASE WHEN e.status = 'running' THEN 1 END) as running_executions,
    AVG(EXTRACT(epoch FROM e.duration)) as avg_duration_seconds,
    MAX(e.created_at) as last_execution_at
FROM functions f
LEFT JOIN executions e ON f.id = e.function_id
GROUP BY f.id, f.name, f.app_id, f.runtime;
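Since the backend is Go, a hedged sketch of reading the `function_stats` view through `database/sql` might look like the following; the helper name and the caller-supplied `*sql.DB` are assumptions, while the column names come from the view definition above.

```go
package main

import (
	"context"
	"database/sql"
	"fmt"
)

// printFunctionStats is an illustrative reader for the function_stats view;
// it assumes an already-open *sql.DB connected to the same database.
func printFunctionStats(ctx context.Context, db *sql.DB) error {
	rows, err := db.QueryContext(ctx,
		`SELECT name, app_id, total_executions, successful_executions, failed_executions
		 FROM function_stats
		 ORDER BY total_executions DESC`)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var name, appID string
		var total, succeeded, failed int64
		if err := rows.Scan(&name, &appID, &total, &succeeded, &failed); err != nil {
			return err
		}
		fmt.Printf("%s/%s: %d total, %d ok, %d failed\n", appID, name, total, succeeded, failed)
	}
	return rows.Err()
}
```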
@ -1,2 +0,0 @@
-- Remove logs column from executions table
ALTER TABLE executions DROP COLUMN IF EXISTS logs;
@ -1,2 +0,0 @@
-- Add logs column to executions table to store function execution logs
ALTER TABLE executions ADD COLUMN logs TEXT[];
@ -1,76 +0,0 @@
package integration

import (
	"context"
	"encoding/json"
	"testing"
	"time"

	"github.com/RyanCopley/skybridge/faas/internal/domain"
	"github.com/RyanCopley/skybridge/faas/internal/runtime/docker"
	"go.uber.org/zap"
)

func TestDockerRuntimeIntegration(t *testing.T) {
	// Create a logger for testing
	logger, err := zap.NewDevelopment()
	if err != nil {
		t.Fatalf("Failed to create logger: %v", err)
	}
	defer logger.Sync()

	// Create the Docker runtime
	runtime, err := docker.NewSimpleDockerRuntime(logger)
	if err != nil {
		t.Skipf("Skipping test - Docker not available: %v", err)
	}

	// Test health check
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	if err := runtime.HealthCheck(ctx); err != nil {
		t.Errorf("Docker runtime health check failed: %v", err)
	}

	// Get runtime info
	info, err := runtime.GetInfo(ctx)
	if err != nil {
		t.Errorf("Failed to get runtime info: %v", err)
	} else {
		t.Logf("Runtime Info: Type=%s, Version=%s, Available=%t", info.Type, info.Version, info.Available)
	}

	// Test with a simple function (using alpine image)
	function := &domain.FunctionDefinition{
		Name:    "test-function",
		Image:   "alpine:latest",
		Timeout: domain.Duration{Duration: 30 * time.Second},
		Memory:  128, // 128MB
	}

	// Deploy the function (pull the image)
	t.Log("Deploying function...")
	if err := runtime.Deploy(ctx, function); err != nil {
		t.Errorf("Failed to deploy function: %v", err)
	}

	// Test execution with a simple command
	input := json.RawMessage(`{"cmd": "echo Hello World"}`)

	t.Log("Executing function...")
	result, err := runtime.Execute(ctx, function, input)
	if err != nil {
		t.Errorf("Failed to execute function: %v", err)
	} else {
		t.Logf("Execution result: Duration=%v, Error=%s", result.Duration, result.Error)
		t.Logf("Output: %s", string(result.Output))
		t.Logf("Logs: %v", result.Logs)
	}
}

func TestHelloWorldFunction(t *testing.T) {
	// This test would require the hello-world-function image to be built
	// For now, we'll skip it
	t.Skip("Skipping hello world function test - requires custom image")
}
@ -1,16 +0,0 @@
module.exports.handler = async (input, context) => {
  console.log("Starting function execution");

  for (let i = 1; i <= 10; i++) {
    console.log(`Processing step ${i}`);
    // Wait 1 second between log outputs
    await new Promise(resolve => setTimeout(resolve, 1000));
  }

  console.log("Function execution completed");

  return {
    message: "Function executed successfully",
    steps: 10
  };
};
@ -1,16 +0,0 @@
import time

def handler(input, context):
    print("Starting function execution")

    for i in range(1, 11):
        print(f"Processing step {i}")
        # Wait 1 second between log outputs
        time.sleep(1)

    print("Function execution completed")

    return {
        "message": "Function executed successfully",
        "steps": 10
    }
1 faas/web/.gitignore vendored
@ -1 +0,0 @@
dist
@ -1,31 +0,0 @@
# Build stage
FROM node:18-alpine AS builder

WORKDIR /app

# Copy package files
COPY package*.json ./

# Install dependencies
RUN npm install

# Copy source code
COPY . .

# Build the application
RUN npm run build

# Production stage
FROM nginx:alpine

# Copy built files from builder stage
COPY --from=builder /app/dist /usr/share/nginx/html

# Copy nginx configuration
COPY nginx.conf /etc/nginx/conf.d/default.conf

# Expose port
EXPOSE 80

# Start nginx
CMD ["nginx", "-g", "daemon off;"]
@ -1,60 +0,0 @@
server {
    listen 80;
    server_name localhost;

    root /usr/share/nginx/html;
    index index.html;

    # Enable gzip compression
    gzip on;
    gzip_vary on;
    gzip_min_length 1024;
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    # Security headers
    add_header X-Frame-Options "SAMEORIGIN" always;
    add_header X-XSS-Protection "1; mode=block" always;
    add_header X-Content-Type-Options "nosniff" always;
    add_header Referrer-Policy "no-referrer-when-downgrade" always;

    # CORS headers for Module Federation
    add_header Access-Control-Allow-Origin "*" always;
    add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS" always;
    add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization" always;

    # Handle preflight OPTIONS requests
    if ($request_method = 'OPTIONS') {
        add_header Access-Control-Allow-Origin "*";
        add_header Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS";
        add_header Access-Control-Allow-Headers "DNT,User-Agent,X-Requested-With,If-Modified-Since,Cache-Control,Content-Type,Range,Authorization";
        add_header Access-Control-Max-Age 1728000;
        add_header Content-Type "text/plain; charset=utf-8";
        add_header Content-Length 0;
        return 204;
    }

    # Main location
    location / {
        try_files $uri $uri/ /index.html;
    }

    # Handle .js files with correct MIME type
    location ~* \.js$ {
        add_header Content-Type application/javascript;
        try_files $uri =404;
    }

    # Cache static assets
    location ~* \.(js|css|png|jpg|jpeg|gif|ico|svg)$ {
        expires 1y;
        add_header Cache-Control "public, immutable";
        try_files $uri =404;
    }

    # Health check endpoint
    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }
}
6042 faas/web/package-lock.json generated
File diff suppressed because it is too large
@ -1,43 +0,0 @@
{
  "name": "faas-web",
  "version": "1.0.0",
  "private": true,
  "dependencies": {
    "@mantine/code-highlight": "^7.0.0",
    "@mantine/core": "^7.0.0",
    "@mantine/dates": "^7.0.0",
    "@mantine/form": "^7.0.0",
    "@mantine/hooks": "^7.0.0",
    "@mantine/modals": "^7.0.0",
    "@mantine/notifications": "^7.0.0",
    "@monaco-editor/react": "^4.7.0",
    "@tabler/icons-react": "^2.40.0",
    "axios": "^1.11.0",
    "dayjs": "^1.11.13",
    "monaco-editor": "^0.52.2",
    "react": "^18.2.0",
    "react-dom": "^18.2.0",
    "react-router-dom": "^6.8.0",
    "@skybridge/web-components": "workspace:*"
  },
  "devDependencies": {
    "@babel/core": "^7.22.0",
    "@babel/preset-react": "^7.22.0",
    "@babel/preset-typescript": "^7.22.0",
    "@types/react": "^18.2.0",
    "@types/react-dom": "^18.2.0",
    "babel-loader": "^9.1.0",
    "css-loader": "^6.8.0",
    "html-webpack-plugin": "^5.5.0",
    "style-loader": "^3.3.0",
    "typescript": "^5.1.0",
    "webpack": "^5.88.0",
    "webpack-cli": "^5.1.0",
    "webpack-dev-server": "^4.15.0"
  },
  "scripts": {
    "start": "webpack serve --mode development",
    "build": "webpack --mode production",
    "dev": "webpack serve --mode development"
  }
}
@ -1,11 +0,0 @@
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>FaaS - Function as a Service</title>
</head>
<body>
  <div id="root"></div>
</body>
</html>
@ -1,227 +0,0 @@
import React, { useState } from 'react';
import { Box, Title, Tabs, ActionIcon, Group, Select } from '@mantine/core';
import { SidebarLayout } from '@skybridge/web-components';
import {
  IconFunction,
  IconPlayerPlay,
  IconStar,
  IconStarFilled
} from '@tabler/icons-react';
import { FunctionList } from './components/FunctionList';
import { FunctionSidebar } from './components/FunctionSidebar';
import { ExecutionSidebar } from './components/ExecutionSidebar';
import ExecutionList from './components/ExecutionList';
import { FunctionDefinition } from './types';

const App: React.FC = () => {
  // Determine current route based on pathname
  const getCurrentRoute = () => {
    const path = window.location.pathname;
    if (path.includes('/functions')) return 'functions';
    if (path.includes('/executions')) return 'executions';
    return 'functions';
  };

  const [currentRoute, setCurrentRoute] = useState(getCurrentRoute());
  const [isFavorited, setIsFavorited] = useState(false);
  const [selectedColor, setSelectedColor] = useState('');
  const [functionSidebarOpened, setFunctionSidebarOpened] = useState(false);
  const [executionSidebarOpened, setExecutionSidebarOpened] = useState(false);
  const [editingFunction, setEditingFunction] = useState<FunctionDefinition | null>(null);
  const [executingFunction, setExecutingFunction] = useState<FunctionDefinition | null>(null);
  const [refreshKey, setRefreshKey] = useState(0);

  // Listen for URL changes (for when the shell navigates)
  React.useEffect(() => {
    const handlePopState = () => {
      setCurrentRoute(getCurrentRoute());
    };

    window.addEventListener('popstate', handlePopState);
    return () => window.removeEventListener('popstate', handlePopState);
  }, []);

  const handleTabChange = (value: string | null) => {
    if (value) {
      // Use history.pushState to update URL and notify shell router
      const basePath = '/app/faas';
      const newPath = `${basePath}/${value}`;

      // Update the URL and internal state
      window.history.pushState(null, '', newPath);
      setCurrentRoute(value);

      // Dispatch a custom event so shell can respond if needed
      window.dispatchEvent(new PopStateEvent('popstate', { state: null }));
    }
  };

  const handleCreateFunction = () => {
    setEditingFunction(null);
    setFunctionSidebarOpened(true);
  };

  const handleEditFunction = (func: FunctionDefinition) => {
    setEditingFunction(func);
    setFunctionSidebarOpened(true);
  };

  const handleExecuteFunction = (func: FunctionDefinition) => {
    setExecutingFunction(func);
    setExecutionSidebarOpened(true);
  };

  const handleFormSuccess = () => {
    setRefreshKey(prev => prev + 1);
  };

  const handleSidebarClose = () => {
    setFunctionSidebarOpened(false);
    setEditingFunction(null);
  };

  const handleExecutionClose = () => {
    setExecutionSidebarOpened(false);
    setExecutingFunction(null);
  };

  const toggleFavorite = () => {
    setIsFavorited(prev => !prev);
  };

  const colorOptions = [
    { value: 'red', label: 'Red' },
    { value: 'blue', label: 'Blue' },
    { value: 'green', label: 'Green' },
    { value: 'purple', label: 'Purple' },
    { value: 'orange', label: 'Orange' },
    { value: 'pink', label: 'Pink' },
    { value: 'teal', label: 'Teal' },
  ];

  const renderContent = () => {
    switch (currentRoute) {
      case 'functions':
        return (
          <FunctionList
            key={refreshKey}
            onCreateFunction={handleCreateFunction}
            onEditFunction={handleEditFunction}
            onExecuteFunction={handleExecuteFunction}
          />
        );
      case 'executions':
        return <ExecutionList />;
      default:
        return (
          <FunctionList
            key={refreshKey}
            onCreateFunction={handleCreateFunction}
            onEditFunction={handleEditFunction}
            onExecuteFunction={handleExecuteFunction}
          />
        );
    }
  };

  // Determine which sidebar is active
  const getActiveSidebar = () => {
    if (functionSidebarOpened) {
      return (
        <FunctionSidebar
          opened={functionSidebarOpened}
          onClose={handleSidebarClose}
          onSuccess={handleFormSuccess}
          editFunction={editingFunction}
        />
      );
    }
    if (executionSidebarOpened) {
      return (
        <ExecutionSidebar
          opened={executionSidebarOpened}
          onClose={handleExecutionClose}
          function={executingFunction}
        />
      );
    }
    return null;
  };

  return (
    <SidebarLayout
      sidebarOpened={functionSidebarOpened || executionSidebarOpened}
      sidebarWidth={600}
      sidebar={getActiveSidebar()}
    >
      <Box
        style={{
          display: 'flex',
          flexDirection: 'column',
          gap: '1rem',
        }}
      >
        <div>
          <Group justify="space-between" align="flex-start">
            <div>
              <Group align="center" gap="sm" mb="xs">
                <Title order={1} size="h2">
                  Function as a Service
                </Title>
                <ActionIcon
                  variant="subtle"
                  size="lg"
                  onClick={toggleFavorite}
                  aria-label={isFavorited ? "Remove from favorites" : "Add to favorites"}
                >
                  {isFavorited ? (
                    <IconStarFilled size={20} color="gold" />
                  ) : (
                    <IconStar size={20} />
                  )}
                </ActionIcon>
              </Group>
            </div>

            {/* Right-side controls */}
            <Group align="flex-start" gap="lg">
              <div>
                <Select
                  placeholder="Choose a color"
                  data={colorOptions}
                  value={selectedColor}
                  onChange={(value) => setSelectedColor(value || '')}
                  size="sm"
                  w={150}
                />
              </div>
            </Group>
          </Group>
        </div>

        <Tabs value={currentRoute} onChange={handleTabChange}>
          <Tabs.List>
            <Tabs.Tab
              value="functions"
              leftSection={<IconFunction size={16} />}
            >
              Functions
            </Tabs.Tab>
            <Tabs.Tab
              value="executions"
              leftSection={<IconPlayerPlay size={16} />}
            >
              Executions
            </Tabs.Tab>
          </Tabs.List>

          <Box pt="md">
            {renderContent()}
          </Box>
        </Tabs>
      </Box>
    </SidebarLayout>
  );
};

export default App;
@ -1,211 +0,0 @@
import React, { useState } from 'react';
import { Box, Title, Tabs, Stack, ActionIcon, Group, Select } from '@mantine/core';
import {
  IconFunction,
  IconPlay,
  IconDashboard,
  IconStar,
  IconStarFilled
} from '@tabler/icons-react';
import { FunctionList } from './components/FunctionList';
import { FunctionForm } from './components/FunctionForm';
import { ExecutionModal } from './components/ExecutionModal';
import { FunctionDefinition } from './types';

const App: React.FC = () => {
  // Determine current route based on pathname
  const getCurrentRoute = () => {
    const path = window.location.pathname;
    if (path.includes('/functions')) return 'functions';
    if (path.includes('/executions')) return 'executions';
    return 'dashboard';
  };

  const [currentRoute, setCurrentRoute] = useState(getCurrentRoute());
  const [isFavorited, setIsFavorited] = useState(false);
  const [selectedColor, setSelectedColor] = useState('');
  const [functionFormOpened, setFunctionFormOpened] = useState(false);
  const [executionModalOpened, setExecutionModalOpened] = useState(false);
  const [editingFunction, setEditingFunction] = useState<FunctionDefinition | null>(null);
  const [executingFunction, setExecutingFunction] = useState<FunctionDefinition | null>(null);
  const [refreshKey, setRefreshKey] = useState(0);

  // Listen for URL changes (for when the shell navigates)
  React.useEffect(() => {
    const handlePopState = () => {
      setCurrentRoute(getCurrentRoute());
    };

    window.addEventListener('popstate', handlePopState);
    return () => window.removeEventListener('popstate', handlePopState);
  }, []);

  const handleTabChange = (value: string | null) => {
    if (value) {
      // Use history.pushState to update URL and notify shell router
      const basePath = '/app/faas';
      const newPath = value === 'dashboard' ? basePath : `${basePath}/${value}`;

      // Update the URL and internal state
      window.history.pushState(null, '', newPath);
      setCurrentRoute(value);

      // Dispatch a custom event so shell can respond if needed
      window.dispatchEvent(new PopStateEvent('popstate', { state: null }));
    }
  };

  const handleCreateFunction = () => {
    setEditingFunction(null);
    setFunctionFormOpened(true);
  };

  const handleEditFunction = (func: FunctionDefinition) => {
    setEditingFunction(func);
    setFunctionFormOpened(true);
  };

  const handleExecuteFunction = (func: FunctionDefinition) => {
    setExecutingFunction(func);
    setExecutionModalOpened(true);
  };

  const handleFormSuccess = () => {
    setRefreshKey(prev => prev + 1);
  };

  const handleFormClose = () => {
    setFunctionFormOpened(false);
    setEditingFunction(null);
  };

  const handleExecutionClose = () => {
    setExecutionModalOpened(false);
    setExecutingFunction(null);
  };

  const toggleFavorite = () => {
    setIsFavorited(prev => !prev);
  };

  const colorOptions = [
    { value: 'red', label: 'Red' },
    { value: 'blue', label: 'Blue' },
    { value: 'green', label: 'Green' },
    { value: 'purple', label: 'Purple' },
    { value: 'orange', label: 'Orange' },
    { value: 'pink', label: 'Pink' },
    { value: 'teal', label: 'Teal' },
  ];

  const renderContent = () => {
    switch (currentRoute) {
      case 'functions':
        return (
          <FunctionList
            key={refreshKey}
            onCreateFunction={handleCreateFunction}
            onEditFunction={handleEditFunction}
            onExecuteFunction={handleExecuteFunction}
          />
        );
      case 'executions':
        return <div>Executions view coming soon...</div>;
      default:
        return (
          <FunctionList
            key={refreshKey}
            onCreateFunction={handleCreateFunction}
            onEditFunction={handleEditFunction}
            onExecuteFunction={handleExecuteFunction}
          />
        );
    }
  };

  return (
    <Box w="100%" pos="relative">
      <Stack gap="lg">
        <div>
          <Group justify="space-between" align="flex-start">
            <div>
              <Group align="center" gap="sm" mb="xs">
                <Title order={1} size="h2">
                  Function as a Service
                </Title>
                <ActionIcon
                  variant="subtle"
                  size="lg"
                  onClick={toggleFavorite}
                  aria-label={isFavorited ? "Remove from favorites" : "Add to favorites"}
                >
                  {isFavorited ? (
                    <IconStarFilled size={20} color="gold" />
                  ) : (
                    <IconStar size={20} />
                  )}
                </ActionIcon>
              </Group>
            </div>

            {/* Right-side controls */}
            <Group align="flex-start" gap="lg">
              <div>
                <Select
                  placeholder="Choose a color"
                  data={colorOptions}
                  value={selectedColor}
                  onChange={(value) => setSelectedColor(value || '')}
                  size="sm"
                  w={150}
                />
              </div>
            </Group>
          </Group>
        </div>

        <Tabs value={currentRoute} onChange={handleTabChange}>
          <Tabs.List>
            <Tabs.Tab
              value="dashboard"
              leftSection={<IconDashboard size={16} />}
            >
              Dashboard
            </Tabs.Tab>
            <Tabs.Tab
              value="functions"
              leftSection={<IconFunction size={16} />}
            >
              Functions
            </Tabs.Tab>
            <Tabs.Tab
              value="executions"
              leftSection={<IconPlay size={16} />}
            >
              Executions
            </Tabs.Tab>
          </Tabs.List>

          <Box pt="md">
            {renderContent()}
          </Box>
        </Tabs>
      </Stack>

      <FunctionForm
        opened={functionFormOpened}
        onClose={handleFormClose}
        onSuccess={handleFormSuccess}
        editFunction={editingFunction}
      />

      <ExecutionModal
        opened={executionModalOpened}
        onClose={handleExecutionClose}
        function={executingFunction}
      />
    </Box>
  );
};

export default App;
@ -1,409 +0,0 @@
import React, { useState, useEffect } from 'react';
import {
  Table,
  Button,
  Stack,
  Title,
  Modal,
  Select,
  TextInput,
  Pagination,
  Group,
  ActionIcon,
  Badge,
  Card,
  Text,
  Loader,
  Alert,
  Code,
  ScrollArea,
  Flex,
} from '@mantine/core';
import {
  IconRefresh,
  IconEye,
  IconX,
  IconSearch,
  IconClock,
} from '@tabler/icons-react';
import { executionApi, functionApi } from '../services/apiService';
import { FunctionExecution, FunctionDefinition } from '../types';
import { notifications } from '@mantine/notifications';

const ExecutionList: React.FC = () => {
  const [executions, setExecutions] = useState<FunctionExecution[]>([]);
  const [functions, setFunctions] = useState<FunctionDefinition[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);
  const [selectedFunction, setSelectedFunction] = useState<string>('');
  const [searchTerm, setSearchTerm] = useState('');
  const [page, setPage] = useState(1);
  const [totalPages, setTotalPages] = useState(1);
  const [selectedExecution, setSelectedExecution] = useState<FunctionExecution | null>(null);
  const [executionLogs, setExecutionLogs] = useState<string[]>([]);
  const [logsModalOpened, setLogsModalOpened] = useState(false);
  const [logsLoading, setLogsLoading] = useState(false);

  const limit = 20;

  const loadExecutions = async () => {
    try {
      setLoading(true);
      setError(null);

      const offset = (page - 1) * limit;
      const functionId = selectedFunction || undefined;

      const response = await executionApi.list(functionId, limit, offset);
      setExecutions(response.data.executions || []);

      // Calculate total pages (rough estimate)
      const hasMore = response.data.executions?.length === limit;
      setTotalPages(hasMore ? page + 1 : page);

    } catch (err: any) {
      setError(err.response?.data?.error || 'Failed to load executions');
      console.error('Error loading executions:', err);
    } finally {
      setLoading(false);
    }
  };

  const loadFunctions = async () => {
    try {
      const response = await functionApi.list();
      setFunctions(response.data.functions || []);
    } catch (err) {
      console.error('Error loading functions:', err);
    }
  };

  useEffect(() => {
    loadFunctions();
  }, []);

  useEffect(() => {
    loadExecutions();
  }, [page, selectedFunction]);

  const handleRefresh = () => {
    loadExecutions();
  };

  const handleViewLogs = async (execution: FunctionExecution) => {
    setSelectedExecution(execution);
    setLogsModalOpened(true);
    setLogsLoading(true);

    try {
      const response = await executionApi.getLogs(execution.id);
      setExecutionLogs(response.data.logs || []);
    } catch (err: any) {
      notifications.show({
        title: 'Error',
        message: err.response?.data?.error || 'Failed to load logs',
        color: 'red',
      });
      setExecutionLogs([]);
    } finally {
      setLogsLoading(false);
    }
  };

  const handleCancelExecution = async (executionId: string) => {
    try {
      await executionApi.cancel(executionId);
      notifications.show({
        title: 'Success',
        message: 'Execution cancelled successfully',
        color: 'green',
      });
      loadExecutions();
    } catch (err: any) {
      notifications.show({
        title: 'Error',
        message: err.response?.data?.error || 'Failed to cancel execution',
        color: 'red',
      });
    }
  };

  const getStatusColor = (status: FunctionExecution['status']) => {
    switch (status) {
      case 'completed':
        return 'green';
      case 'failed':
        return 'red';
      case 'running':
        return 'blue';
      case 'pending':
        return 'yellow';
      case 'timeout':
        return 'orange';
      case 'canceled':
        return 'gray';
      default:
        return 'gray';
    }
  };

  const formatDuration = (nanoseconds: number) => {
    if (!nanoseconds) return 'N/A';
    const milliseconds = nanoseconds / 1000000;
    if (milliseconds < 1000) {
      return `${milliseconds.toFixed(0)}ms`;
    }
    return `${(milliseconds / 1000).toFixed(2)}s`;
  };

  const formatMemory = (bytes: number) => {
    if (!bytes) return 'N/A';
    if (bytes < 1024 * 1024) {
      return `${(bytes / 1024).toFixed(0)}KB`;
    }
    return `${(bytes / (1024 * 1024)).toFixed(1)}MB`;
  };

  const formatDate = (dateString: string) => {
    return new Date(dateString).toLocaleString();
  };

  const getFunctionName = (functionId: string) => {
    const func = functions.find(f => f.id === functionId);
    return func?.name || 'Unknown Function';
  };

  const filteredExecutions = executions.filter(execution => {
    if (!searchTerm) return true;
    const functionName = getFunctionName(execution.function_id);
    return functionName.toLowerCase().includes(searchTerm.toLowerCase()) ||
      execution.id.toLowerCase().includes(searchTerm.toLowerCase()) ||
      execution.status.toLowerCase().includes(searchTerm.toLowerCase());
  });

  if (loading && executions.length === 0) {
    return (
      <Stack gap="lg">
        <Group justify="space-between">
          <Title order={2}>Function Executions</Title>
          <Button
            leftSection={<IconRefresh size={16} />}
            onClick={handleRefresh}
            loading={loading}
          >
            Refresh
          </Button>
        </Group>

        <Stack align="center" justify="center" h={200}>
          <Loader size="lg" />
          <Text>Loading executions...</Text>
        </Stack>
      </Stack>
    );
  }

  return (
    <Stack gap="lg">
      <Group justify="space-between">
        <Title order={2}>Function Executions</Title>
        <Button
          leftSection={<IconRefresh size={16} />}
          onClick={handleRefresh}
          loading={loading}
        >
          Refresh
        </Button>
      </Group>

      <Group>
        <TextInput
          placeholder="Search executions..."
          value={searchTerm}
          onChange={(event) => setSearchTerm(event.currentTarget.value)}
          leftSection={<IconSearch size={16} />}
          style={{ width: 300 }}
        />
        <Select
          placeholder="All Functions"
          data={functions.map(f => ({ value: f.id, label: f.name }))}
          value={selectedFunction}
          onChange={(value) => {
            setSelectedFunction(value || '');
            setPage(1);
          }}
          clearable
          style={{ width: 200 }}
        />
      </Group>

      {error && (
        <Alert color="red" title="Error">
          {error}
        </Alert>
      )}

      {filteredExecutions.length === 0 ? (
        <Card shadow="sm" radius="md" withBorder p="xl">
          <Stack align="center" gap="md">
            <IconClock size={48} color="gray" />
            <div style={{ textAlign: 'center' }}>
              <Text fw={500} mb="xs">
                No executions found
              </Text>
              <Text size="sm" c="dimmed">
                There are no function executions matching your current filters
              </Text>
            </div>
          </Stack>
        </Card>
      ) : (
        <Card shadow="sm" radius="md" withBorder>
          <Table>
            <Table.Thead>
              <Table.Tr>
                <Table.Th>Function</Table.Th>
                <Table.Th>Status</Table.Th>
                <Table.Th>Duration</Table.Th>
                <Table.Th>Memory</Table.Th>
                <Table.Th>Started</Table.Th>
                <Table.Th>Actions</Table.Th>
              </Table.Tr>
            </Table.Thead>
            <Table.Tbody>
              {filteredExecutions.map((execution) => (
                <Table.Tr key={execution.id}>
                  <Table.Td>
                    <Stack gap={2}>
                      <Text fw={500}>{getFunctionName(execution.function_id)}</Text>
                      <Code style={{ fontSize: '12px' }}>{execution.id.slice(0, 8)}...</Code>
                    </Stack>
                  </Table.Td>
                  <Table.Td>
                    <Badge color={getStatusColor(execution.status)} variant="light">
                      {execution.status}
                    </Badge>
                  </Table.Td>
                  <Table.Td>
                    <Group gap="xs">
                      <IconClock size={14} />
                      <Text size="sm">{formatDuration(execution.duration)}</Text>
                    </Group>
                  </Table.Td>
                  <Table.Td>
                    <Group gap="xs">
                      {/* <IconMemory size={14} /> */}
                      <Text size="sm">{formatMemory(execution.memory_used)}</Text>
                    </Group>
                  </Table.Td>
                  <Table.Td>
                    <Text size="sm">{formatDate(execution.created_at)}</Text>
                  </Table.Td>
                  <Table.Td>
                    <Group gap="xs">
                      <ActionIcon
                        size="sm"
                        variant="subtle"
                        onClick={() => handleViewLogs(execution)}
                        title="View Logs"
                      >
                        <IconEye size={16} />
                      </ActionIcon>
                      {(execution.status === 'running' || execution.status === 'pending') && (
                        <ActionIcon
                          size="sm"
                          variant="subtle"
                          color="red"
                          onClick={() => handleCancelExecution(execution.id)}
                          title="Cancel Execution"
                        >
                          <IconX size={16} />
                        </ActionIcon>
                      )}
                    </Group>
                  </Table.Td>
                </Table.Tr>
              ))}
            </Table.Tbody>
          </Table>
        </Card>
      )}

      {totalPages > 1 && (
        <Group justify="center">
          <Pagination
            value={page}
            onChange={setPage}
            total={totalPages}
            size="sm"
          />
        </Group>
      )}

      {/* Logs Modal */}
      <Modal
        opened={logsModalOpened}
        onClose={() => setLogsModalOpened(false)}
        title={`Execution Logs - ${selectedExecution?.id.slice(0, 8)}...`}
        size="xl"
      >
        <Stack gap="md">
          {selectedExecution && (
            <Card>
              <Group justify="space-between">
                <Stack gap="xs">
                  <Text size="sm"><strong>Function:</strong> {getFunctionName(selectedExecution.function_id)}</Text>
                  <Text size="sm"><strong>Status:</strong> <Badge color={getStatusColor(selectedExecution.status)}>{selectedExecution.status}</Badge></Text>
                </Stack>
                <Stack gap="xs" align="flex-end">
                  <Text size="sm"><strong>Duration:</strong> {formatDuration(selectedExecution.duration)}</Text>
                  <Text size="sm"><strong>Memory:</strong> {formatMemory(selectedExecution.memory_used)}</Text>
                </Stack>
              </Group>

              {selectedExecution.input && (
                <div>
                  <Text size="sm" fw={500} mt="md" mb="xs">Input:</Text>
                  <Code block>{JSON.stringify(selectedExecution.input, null, 2)}</Code>
                </div>
              )}

              {selectedExecution.output && (
                <div>
                  <Text size="sm" fw={500} mt="md" mb="xs">Output:</Text>
                  <Code block>{JSON.stringify(selectedExecution.output, null, 2)}</Code>
                </div>
              )}

              {selectedExecution.error && (
                <div>
                  <Text size="sm" fw={500} mt="md" mb="xs">Error:</Text>
                  <Code block c="red">{selectedExecution.error}</Code>
                </div>
              )}
            </Card>
          )}

          <div>
            <Text size="sm" fw={500} mb="xs">Container Logs:</Text>
            {logsLoading ? (
              <Flex justify="center" p="md">
                <Loader size="sm" />
              </Flex>
            ) : executionLogs.length === 0 ? (
              <Text c="dimmed" ta="center" p="md">No logs available</Text>
            ) : (
              <ScrollArea h={300}>
                <Code block>
                  {executionLogs.join('\n')}
                </Code>
              </ScrollArea>
            )}
          </div>
        </Stack>
      </Modal>
    </Stack>
  );
};

export default ExecutionList;
@ -1,413 +0,0 @@
import React, { useState, useEffect, useRef } from 'react';
import {
  Modal,
  Button,
  Group,
  Stack,
  Text,
  Textarea,
  Switch,
  Alert,
  Badge,
  Divider,
  Paper,
  JsonInput,
  Loader,
  ActionIcon,
  Tooltip,
} from '@mantine/core';
import { IconPlayerPlay, IconPlayerStop, IconRefresh, IconCopy } from '@tabler/icons-react';
import { notifications, ExecutionStatusBadge } from '@skybridge/web-components';
import { functionApi, executionApi } from '../services/apiService';
import { FunctionDefinition, ExecuteFunctionResponse, FunctionExecution } from '../types';

interface ExecutionModalProps {
  opened: boolean;
  onClose: () => void;
  function: FunctionDefinition | null;
}

export const ExecutionModal: React.FC<ExecutionModalProps> = ({
  opened,
  onClose,
  function: func,
}) => {
  const [input, setInput] = useState('{}');
  const [async, setAsync] = useState(false);
  const [executing, setExecuting] = useState(false);
  const [result, setResult] = useState<ExecuteFunctionResponse | null>(null);
  const [execution, setExecution] = useState<FunctionExecution | null>(null);
  const [logs, setLogs] = useState<string[]>([]);
  const [loadingLogs, setLoadingLogs] = useState(false);
  const [autoRefreshLogs, setAutoRefreshLogs] = useState(false);
  const pollIntervalRef = useRef<NodeJS.Timeout | null>(null);
  const logsPollIntervalRef = useRef<NodeJS.Timeout | null>(null);

  const stopLogsAutoRefresh = () => {
    if (logsPollIntervalRef.current) {
      clearInterval(logsPollIntervalRef.current);
      logsPollIntervalRef.current = null;
    }
    setAutoRefreshLogs(false);
  };

  // Cleanup intervals on unmount or when modal closes
  useEffect(() => {
    if (!opened) {
      // Stop auto-refresh when modal closes
      stopLogsAutoRefresh();
      if (pollIntervalRef.current) {
        clearTimeout(pollIntervalRef.current);
      }
    }
  }, [opened]);

  // Cleanup intervals on unmount
  useEffect(() => {
    return () => {
      if (pollIntervalRef.current) {
        clearTimeout(pollIntervalRef.current);
      }
      if (logsPollIntervalRef.current) {
        clearInterval(logsPollIntervalRef.current);
      }
    };
  }, []);

  if (!func) return null;

  const handleExecute = async () => {
    try {
      setExecuting(true);
      setResult(null);
      setExecution(null);
      setLogs([]);

      let parsedInput;
      try {
        parsedInput = input.trim() ? JSON.parse(input) : undefined;
      } catch (e) {
        notifications.show({
          title: 'Error',
          message: 'Invalid JSON input',
          color: 'red',
        });
        return;
      }

      const response = await functionApi.execute(func.id, {
        input: parsedInput,
        async,
      });

      setResult(response.data);

      if (async) {
        // Poll for execution status and start auto-refreshing logs
        pollExecution(response.data.execution_id);
      } else {
        // For synchronous executions, load logs immediately
        if (response.data.execution_id) {
          loadLogs(response.data.execution_id);
        }
      }

      notifications.show({
        title: 'Success',
        message: `Function ${async ? 'invoked' : 'executed'} successfully`,
        color: 'green',
      });
    } catch (error) {
      console.error('Execution error:', error);
      notifications.show({
        title: 'Error',
        message: 'Failed to execute function',
        color: 'red',
      });
    } finally {
      setExecuting(false);
    }
  };

  const pollExecution = async (executionId: string) => {
    // Start auto-refreshing logs immediately for async executions
    startLogsAutoRefresh(executionId);

    const poll = async () => {
      try {
        const response = await executionApi.getById(executionId);
        setExecution(response.data);

        if (response.data.status === 'running' || response.data.status === 'pending') {
          pollIntervalRef.current = setTimeout(poll, 2000); // Poll every 2 seconds
        } else {
          // Execution completed, stop auto-refresh and load final logs
          stopLogsAutoRefresh();
          loadLogs(executionId);
        }
      } catch (error) {
        console.error('Error polling execution:', error);
        stopLogsAutoRefresh();
      }
    };

    poll();
  };

  const loadLogs = async (executionId: string) => {
    try {
      console.debug(`[ExecutionModal] Loading logs for execution ${executionId}`);
      setLoadingLogs(true);
      const response = await executionApi.getLogs(executionId);
      console.debug(`[ExecutionModal] Loaded logs for execution ${executionId}:`, {
        logCount: response.data.logs?.length || 0,
        logs: response.data.logs
      });
      setLogs(response.data.logs || []);
    } catch (error) {
      console.error(`[ExecutionModal] Error loading logs for execution ${executionId}:`, error);
    } finally {
      setLoadingLogs(false);
    }
  };

  const startLogsAutoRefresh = (executionId: string) => {
    console.debug(`[ExecutionModal] Starting auto-refresh for execution ${executionId}`);

    // Clear any existing interval
    if (logsPollIntervalRef.current) {
      clearInterval(logsPollIntervalRef.current);
    }

    setAutoRefreshLogs(true);

    // Load logs immediately
    loadLogs(executionId);

    // Set up auto-refresh every 2 seconds
    logsPollIntervalRef.current = setInterval(async () => {
      try {
        console.debug(`[ExecutionModal] Auto-refreshing logs for execution ${executionId}`);
        const response = await executionApi.getLogs(executionId);
        console.debug(`[ExecutionModal] Auto-refresh got logs for execution ${executionId}:`, {
          logCount: response.data.logs?.length || 0,
          logs: response.data.logs
        });
        setLogs(response.data.logs || []);
      } catch (error) {
        console.error(`[ExecutionModal] Error auto-refreshing logs for execution ${executionId}:`, error);
      }
    }, 2000);
  };

  const handleCancel = async () => {
    if (result && async) {
      try {
        await executionApi.cancel(result.execution_id);
        notifications.show({
          title: 'Success',
          message: 'Execution canceled',
          color: 'orange',
        });
        // Refresh execution status
        if (result.execution_id) {
          const response = await executionApi.getById(result.execution_id);
          setExecution(response.data);
        }
      } catch (error) {
        notifications.show({
          title: 'Error',
          message: 'Failed to cancel execution',
          color: 'red',
        });
      }
    }
  };

  const copyToClipboard = (text: string) => {
    navigator.clipboard.writeText(text);
    notifications.show({
      title: 'Copied',
      message: 'Copied to clipboard',
      color: 'green',
    });
  };

  return (
    <Modal
      opened={opened}
      onClose={onClose}
      title={`Execute Function: ${func.name}`}
      size="xl"
    >
      <Stack gap="md">
        {/* Function Info */}
        <Paper withBorder p="sm" bg="gray.0">
          <Group justify="space-between">
            <div>
              <Text size="sm" fw={500}>{func.name}</Text>
              <Text size="xs" c="dimmed">{func.runtime} • {func.memory}MB • {func.timeout}</Text>
            </div>
            <Badge variant="light">
              {func.runtime}
            </Badge>
          </Group>
        </Paper>

        {/* Input */}
        <JsonInput
          label="Function Input (JSON)"
          placeholder='{"key": "value"}'
          value={input}
          onChange={setInput}
          minRows={4}
          maxRows={8}
          validationError="Invalid JSON"
        />

        {/* Execution Options */}
        <Group justify="space-between">
          <Switch
            label="Asynchronous execution"
            description="Execute in background"
            checked={async}
            onChange={(event) => setAsync(event.currentTarget.checked)}
          />
          <Group>
            <Button
              leftSection={<IconPlayerPlay size={16} />}
              onClick={handleExecute}
              loading={executing}
              disabled={executing}
            >
              {async ? 'Invoke' : 'Execute'}
            </Button>
            {result && async && execution?.status === 'running' && (
              <Button
                leftSection={<IconPlayerStop size={16} />}
                color="orange"
                variant="light"
                onClick={handleCancel}
              >
                Cancel
              </Button>
            )}
          </Group>
        </Group>

        {/* Results */}
        {result && (
          <>
            <Divider label="Execution Result" labelPosition="center" />

            <Paper withBorder p="md">
              <Group justify="space-between" mb="sm">
                <Text fw={500}>Execution #{result.execution_id.slice(0, 8)}...</Text>
                <Group gap="xs">
                  <ExecutionStatusBadge value={execution?.status || result.status} />
                  {result.duration && (
                    <Badge variant="light">
                      {result.duration}ms
                    </Badge>
                  )}
                  {result.memory_used && (
                    <Badge variant="light">
                      {result.memory_used}MB
                    </Badge>
                  )}
                </Group>
              </Group>

              {/* Output */}
              {(result.output || execution?.output) && (
                <div>
                  <Group justify="space-between" mb="xs">
                    <Text size="sm" fw={500}>Output:</Text>
                    <Tooltip label="Copy output">
                      <ActionIcon
                        variant="light"
                        size="sm"
                        onClick={() => copyToClipboard(JSON.stringify(result.output || execution?.output, null, 2))}
                      >
                        <IconCopy size={14} />
                      </ActionIcon>
                    </Tooltip>
                  </Group>
                  <Paper bg="gray.1" p="sm">
                    <Text size="sm" component="pre" style={{ whiteSpace: 'pre-wrap' }}>
                      {JSON.stringify(result.output || execution?.output, null, 2)}
                    </Text>
                  </Paper>
                </div>
              )}

              {/* Error */}
              {(result.error || execution?.error) && (
                <Alert color="red" mt="sm">
                  <Text size="sm">{result.error || execution?.error}</Text>
                </Alert>
              )}

              {/* Logs */}
              <div style={{ marginTop: '1rem' }}>
                <Group justify="space-between" mb="xs">
                  <Group gap="xs">
                    <Text size="sm" fw={500}>Logs:</Text>
                    {autoRefreshLogs && (
                      <Badge size="xs" color="blue" variant="light">
                        Auto-refreshing
                      </Badge>
                    )}
                  </Group>
                  <Group gap="xs">
                    {result.execution_id && (
                      <Button
                        size="xs"
                        variant={autoRefreshLogs ? "filled" : "light"}
                        color={autoRefreshLogs ? "red" : "blue"}
                        leftSection={<IconRefresh size={12} />}
                        onClick={() => {
                          if (autoRefreshLogs) {
                            stopLogsAutoRefresh();
                          } else {
                            startLogsAutoRefresh(result.execution_id);
                          }
                        }}
                      >
                        {autoRefreshLogs ? 'Stop Auto-refresh' : 'Auto-refresh'}
                      </Button>
                    )}
                    <Button
                      size="xs"
                      variant="light"
                      leftSection={<IconRefresh size={12} />}
                      onClick={() => result.execution_id && loadLogs(result.execution_id)}
                      loading={loadingLogs}
                      disabled={autoRefreshLogs}
                    >
                      Manual Refresh
                    </Button>
                  </Group>
                </Group>
                <Paper bg="gray.9" p="sm" mah={200} style={{ overflow: 'auto' }}>
                  {loadingLogs ? (
                    <Group justify="center">
                      <Loader size="sm" />
                    </Group>
                  ) : (logs.length > 0 || (execution?.logs && execution.logs.length > 0)) ? (
                    <Text size="xs" c="white" component="pre">
                      {(execution?.logs || logs).join('\n')}
                    </Text>
                  ) : (
                    <Text size="xs" c="gray.5">No logs available</Text>
                  )}
                </Paper>
              </div>
            </Paper>
          </>
        )}
      </Stack>
    </Modal>
  );
};
@ -1,453 +0,0 @@
|
||||
import React, { useState, useEffect, useRef } from 'react';
|
||||
import {
|
||||
Paper,
|
||||
Button,
|
||||
Group,
|
||||
Stack,
|
||||
Text,
|
||||
Textarea,
|
||||
Switch,
|
||||
Alert,
|
||||
Badge,
|
||||
Divider,
|
||||
JsonInput,
|
||||
Loader,
|
||||
ActionIcon,
|
||||
Tooltip,
|
||||
Title,
|
||||
ScrollArea,
|
||||
Box,
|
||||
} from '@mantine/core';
|
||||
import { IconPlayerPlay, IconPlayerStop, IconRefresh, IconCopy, IconX } from '@tabler/icons-react';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import { functionApi, executionApi } from '../services/apiService';
|
||||
import { FunctionDefinition, ExecuteFunctionResponse, FunctionExecution } from '../types';
|
||||
|
||||
interface ExecutionSidebarProps {
|
||||
opened: boolean;
|
||||
onClose: () => void;
|
||||
function: FunctionDefinition | null;
|
||||
}
|
||||
|
||||
export const ExecutionSidebar: React.FC<ExecutionSidebarProps> = ({
|
||||
opened,
|
||||
onClose,
|
||||
function: func,
|
||||
}) => {
|
||||
const [input, setInput] = useState('{}');
|
||||
const [async, setAsync] = useState(false);
|
||||
const [executing, setExecuting] = useState(false);
|
||||
const [result, setResult] = useState<ExecuteFunctionResponse | null>(null);
|
||||
const [execution, setExecution] = useState<FunctionExecution | null>(null);
|
||||
const [logs, setLogs] = useState<string[]>([]);
|
||||
const [loadingLogs, setLoadingLogs] = useState(false);
|
||||
const [autoRefreshLogs, setAutoRefreshLogs] = useState(false);
|
||||
const pollIntervalRef = useRef<NodeJS.Timeout | null>(null);
|
||||
const logsPollIntervalRef = useRef<NodeJS.Timeout | null>(null);
|
||||
|
||||
const stopLogsAutoRefresh = () => {
|
||||
if (logsPollIntervalRef.current) {
|
||||
clearInterval(logsPollIntervalRef.current);
|
||||
logsPollIntervalRef.current = null;
|
||||
}
|
||||
setAutoRefreshLogs(false);
|
||||
};
|
||||
|
||||
// Cleanup intervals on unmount or when sidebar closes
|
||||
useEffect(() => {
|
||||
if (!opened) {
|
||||
// Stop auto-refresh when sidebar closes
|
||||
stopLogsAutoRefresh();
|
||||
if (pollIntervalRef.current) {
|
||||
clearTimeout(pollIntervalRef.current);
|
||||
}
|
||||
}
|
||||
}, [opened]);
|
||||
|
||||
// Cleanup intervals on unmount
|
||||
useEffect(() => {
|
||||
return () => {
|
||||
if (pollIntervalRef.current) {
|
||||
clearTimeout(pollIntervalRef.current);
|
||||
}
|
||||
if (logsPollIntervalRef.current) {
|
||||
clearInterval(logsPollIntervalRef.current);
|
||||
}
|
||||
};
|
||||
}, []);
|
||||
|
||||
if (!func) return null;
|
||||
|
||||
const handleExecute = async () => {
|
||||
try {
|
||||
setExecuting(true);
|
||||
setResult(null);
|
||||
setExecution(null);
|
||||
setLogs([]);
|
||||
|
||||
let parsedInput;
|
||||
try {
|
||||
parsedInput = input.trim() ? JSON.parse(input) : undefined;
|
||||
} catch (e) {
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Invalid JSON input',
|
||||
color: 'red',
|
||||
});
|
||||
return;
|
||||
}
|
||||
|
||||
const response = await functionApi.execute(func.id, {
|
||||
input: parsedInput,
|
||||
async,
|
||||
});
|
||||
|
||||
setResult(response.data);
|
||||
|
||||
if (async) {
|
||||
// Poll for execution status and start auto-refreshing logs
|
||||
pollExecution(response.data.execution_id);
|
||||
} else {
|
||||
// For synchronous executions, load logs immediately
|
||||
if (response.data.execution_id) {
|
||||
loadLogs(response.data.execution_id);
|
||||
}
|
||||
}
|
||||
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: `Function ${async ? 'invoked' : 'executed'} successfully`,
|
||||
color: 'green',
|
||||
});
|
||||
} catch (error) {
|
||||
console.error('Execution error:', error);
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Failed to execute function',
|
||||
color: 'red',
|
||||
});
|
||||
} finally {
|
||||
setExecuting(false);
|
||||
}
|
||||
};
|
||||
|
||||
const pollExecution = async (executionId: string) => {
|
||||
// Start auto-refreshing logs immediately for async executions
|
||||
startLogsAutoRefresh(executionId);
|
||||
|
||||
const poll = async () => {
|
||||
try {
|
||||
const response = await executionApi.getById(executionId);
|
||||
setExecution(response.data);
|
||||
|
||||
if (response.data.status === 'running' || response.data.status === 'pending') {
|
||||
pollIntervalRef.current = setTimeout(poll, 2000); // Poll every 2 seconds
|
||||
} else {
|
||||
// Execution completed, stop auto-refresh and load final logs
|
||||
stopLogsAutoRefresh();
|
||||
loadLogs(executionId);
|
||||
}
|
||||
} catch (error) {
|
||||
console.error('Error polling execution:', error);
|
||||
stopLogsAutoRefresh();
|
||||
}
|
||||
};
|
||||
|
||||
poll();
|
||||
};
|
||||
|
||||
const loadLogs = async (executionId: string) => {
|
||||
try {
|
||||
console.debug(`[ExecutionSidebar] Loading logs for execution ${executionId}`);
|
||||
setLoadingLogs(true);
|
||||
const response = await executionApi.getLogs(executionId);
|
||||
console.debug(`[ExecutionSidebar] Loaded logs for execution ${executionId}:`, {
|
||||
logCount: response.data.logs?.length || 0,
|
||||
logs: response.data.logs
|
||||
});
|
||||
setLogs(response.data.logs || []);
|
||||
} catch (error) {
|
||||
console.error(`[ExecutionSidebar] Error loading logs for execution ${executionId}:`, error);
|
||||
} finally {
|
||||
setLoadingLogs(false);
|
||||
}
|
||||
};
|
||||
|
||||
const startLogsAutoRefresh = (executionId: string) => {
|
||||
console.debug(`[ExecutionSidebar] Starting auto-refresh for execution ${executionId}`);
|
||||
|
||||
// Clear any existing interval
|
||||
if (logsPollIntervalRef.current) {
|
||||
clearInterval(logsPollIntervalRef.current);
|
||||
}
|
||||
|
||||
setAutoRefreshLogs(true);
|
||||
|
||||
// Load logs immediately
|
||||
loadLogs(executionId);
|
||||
|
||||
// Set up auto-refresh every 2 seconds
|
||||
logsPollIntervalRef.current = setInterval(async () => {
|
||||
try {
|
||||
console.debug(`[ExecutionSidebar] Auto-refreshing logs for execution ${executionId}`);
|
||||
const response = await executionApi.getLogs(executionId);
|
||||
console.debug(`[ExecutionSidebar] Auto-refresh got logs for execution ${executionId}:`, {
|
||||
logCount: response.data.logs?.length || 0,
|
||||
logs: response.data.logs
|
||||
});
|
||||
setLogs(response.data.logs || []);
|
||||
} catch (error) {
|
||||
console.error(`[ExecutionSidebar] Error auto-refreshing logs for execution ${executionId}:`, error);
|
||||
}
|
||||
}, 2000);
|
||||
};
|
||||
|
||||
const handleCancel = async () => {
|
||||
if (result && async) {
|
||||
try {
|
||||
await executionApi.cancel(result.execution_id);
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: 'Execution canceled',
|
||||
color: 'orange',
|
||||
});
|
||||
// Refresh execution status
|
||||
if (result.execution_id) {
|
||||
const response = await executionApi.getById(result.execution_id);
|
||||
setExecution(response.data);
|
||||
}
|
||||
} catch (error) {
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Failed to cancel execution',
|
||||
color: 'red',
|
||||
});
|
||||
}
|
||||
}
|
||||
};
|
||||
|
||||
const getStatusColor = (status: string) => {
|
||||
switch (status) {
|
||||
case 'completed': return 'green';
|
||||
case 'failed': return 'red';
|
||||
case 'running': return 'blue';
|
||||
case 'pending': return 'yellow';
|
||||
case 'canceled': return 'orange';
|
||||
case 'timeout': return 'red';
|
||||
default: return 'gray';
|
||||
}
|
||||
};
|
||||
|
||||
const copyToClipboard = (text: string) => {
|
||||
navigator.clipboard.writeText(text);
|
||||
notifications.show({
|
||||
title: 'Copied',
|
||||
message: 'Copied to clipboard',
|
||||
color: 'green',
|
||||
});
|
||||
};
|
||||
|
||||
if (!opened) return null;
|
||||
|
||||
return (
|
||||
<Paper
|
||||
style={{
|
||||
height: '100%',
|
||||
borderRadius: 0,
|
||||
display: 'flex',
|
||||
flexDirection: 'column',
|
||||
borderLeft: '1px solid var(--mantine-color-gray-3)',
|
||||
backgroundColor: 'var(--mantine-color-body)',
|
||||
}}
|
||||
>
|
||||
{/* Header */}
|
||||
<Group justify="space-between" p="md" style={{ borderBottom: '1px solid var(--mantine-color-gray-3)' }}>
|
||||
<Title order={4}>
|
||||
Execute Function: {func.name}
|
||||
</Title>
|
||||
<ActionIcon
|
||||
variant="subtle"
|
||||
color="gray"
|
||||
onClick={onClose}
|
||||
>
|
||||
<IconX size={18} />
|
||||
</ActionIcon>
|
||||
</Group>
|
||||
|
||||
{/* Content */}
|
||||
<ScrollArea style={{ flex: 1 }}>
|
||||
<Box p="md">
|
||||
<Stack gap="md">
|
||||
{/* Function Info */}
|
||||
<Paper withBorder p="sm" bg="gray.0">
|
||||
<Group justify="space-between">
|
||||
<div>
|
||||
<Text size="sm" fw={500}>{func.name}</Text>
|
||||
<Text size="xs" c="dimmed">{func.runtime} • {func.memory}MB • {func.timeout}</Text>
|
||||
</div>
|
||||
<Badge variant="light">
|
||||
{func.runtime}
|
||||
</Badge>
|
||||
</Group>
|
||||
</Paper>
|
||||
|
||||
{/* Input */}
|
||||
<JsonInput
|
||||
label="Function Input (JSON)"
|
||||
placeholder='{"key": "value"}'
|
||||
value={input}
|
||||
onChange={setInput}
|
||||
minRows={4}
|
||||
maxRows={8}
|
||||
validationError="Invalid JSON"
|
||||
/>
|
||||
|
||||
{/* Execution Options */}
|
||||
<Group justify="space-between">
|
||||
<Switch
|
||||
label="Asynchronous execution"
|
||||
description="Execute in background"
|
||||
checked={async}
|
||||
onChange={(event) => setAsync(event.currentTarget.checked)}
|
||||
/>
|
||||
<Group>
|
||||
<Button
|
||||
leftSection={<IconPlayerPlay size={16} />}
|
||||
onClick={handleExecute}
|
||||
loading={executing}
|
||||
disabled={executing}
|
||||
>
|
||||
{async ? 'Invoke' : 'Execute'}
|
||||
</Button>
|
||||
{result && async && execution?.status === 'running' && (
|
||||
<Button
|
||||
leftSection={<IconPlayerStop size={16} />}
|
||||
color="orange"
|
||||
variant="light"
|
||||
onClick={handleCancel}
|
||||
>
|
||||
Cancel
|
||||
</Button>
|
||||
)}
|
||||
</Group>
|
||||
</Group>
|
||||
|
||||
{/* Results */}
|
||||
{result && (
|
||||
<>
|
||||
<Divider label="Execution Result" labelPosition="center" />
|
||||
|
||||
<Paper withBorder p="md">
|
||||
<Group justify="space-between" mb="sm">
|
||||
<Text fw={500}>Execution #{result.execution_id.slice(0, 8)}...</Text>
|
||||
<Group gap="xs">
|
||||
<Badge color={getStatusColor(execution?.status || result.status)}>
|
||||
{execution?.status || result.status}
|
||||
</Badge>
|
||||
{result.duration && (
|
||||
<Badge variant="light">
|
||||
{result.duration}ms
|
||||
</Badge>
|
||||
)}
|
||||
{result.memory_used && (
|
||||
<Badge variant="light">
|
||||
{result.memory_used}MB
|
||||
</Badge>
|
||||
)}
|
||||
</Group>
|
||||
</Group>
|
||||
|
||||
{/* Output */}
|
||||
{(result.output || execution?.output) && (
|
||||
<div>
|
||||
<Group justify="space-between" mb="xs">
|
||||
<Text size="sm" fw={500}>Output:</Text>
|
||||
<Tooltip label="Copy output">
|
||||
<ActionIcon
|
||||
variant="light"
|
||||
size="sm"
|
||||
onClick={() => copyToClipboard(JSON.stringify(result.output || execution?.output, null, 2))}
|
||||
>
|
||||
<IconCopy size={14} />
|
||||
</ActionIcon>
|
||||
</Tooltip>
|
||||
</Group>
|
||||
<Paper bg="gray.1" p="sm">
|
||||
<Text size="sm" component="pre" style={{ whiteSpace: 'pre-wrap' }}>
|
||||
{JSON.stringify(result.output || execution?.output, null, 2)}
|
||||
</Text>
|
||||
</Paper>
|
||||
</div>
|
||||
)}
|
||||
|
||||
{/* Error */}
|
||||
{(result.error || execution?.error) && (
|
||||
<Alert color="red" mt="sm">
|
||||
<Text size="sm">{result.error || execution?.error}</Text>
|
||||
</Alert>
|
||||
)}
|
||||
|
||||
{/* Logs */}
|
||||
<div style={{ marginTop: '1rem' }}>
|
||||
<Group justify="space-between" mb="xs">
|
||||
<Group gap="xs">
|
||||
<Text size="sm" fw={500}>Logs:</Text>
|
||||
{autoRefreshLogs && (
|
||||
<Badge size="xs" color="blue" variant="light">
|
||||
Auto-refreshing
|
||||
</Badge>
|
||||
)}
|
||||
</Group>
|
||||
<Group gap="xs">
|
||||
{result.execution_id && (
|
||||
<Button
|
||||
size="xs"
|
||||
variant={autoRefreshLogs ? "filled" : "light"}
|
||||
color={autoRefreshLogs ? "red" : "blue"}
|
||||
leftSection={<IconRefresh size={12} />}
|
||||
onClick={() => {
|
||||
if (autoRefreshLogs) {
|
||||
stopLogsAutoRefresh();
|
||||
} else {
|
||||
startLogsAutoRefresh(result.execution_id);
|
||||
}
|
||||
}}
|
||||
>
|
||||
{autoRefreshLogs ? 'Stop Auto-refresh' : 'Auto-refresh'}
|
||||
</Button>
|
||||
)}
|
||||
<Button
|
||||
size="xs"
|
||||
variant="light"
|
||||
leftSection={<IconRefresh size={12} />}
|
||||
onClick={() => result.execution_id && loadLogs(result.execution_id)}
|
||||
loading={loadingLogs}
|
||||
disabled={autoRefreshLogs}
|
||||
>
|
||||
Manual Refresh
|
||||
</Button>
|
||||
</Group>
|
||||
</Group>
|
||||
<Paper bg="gray.9" p="sm" mah={200} style={{ overflow: 'auto' }}>
|
||||
{loadingLogs ? (
|
||||
<Group justify="center">
|
||||
<Loader size="sm" />
|
||||
</Group>
|
||||
) : (logs.length > 0 || (execution?.logs && execution.logs.length > 0)) ? (
|
||||
<Text size="xs" c="white" component="pre">
|
||||
{(execution?.logs || logs).join('\n')}
|
||||
</Text>
|
||||
) : (
|
||||
<Text size="xs" c="gray.5">No logs available</Text>
|
||||
)}
|
||||
</Paper>
|
||||
</div>
|
||||
</Paper>
|
||||
</>
|
||||
)}
|
||||
</Stack>
|
||||
</Box>
|
||||
</ScrollArea>
|
||||
</Paper>
|
||||
);
|
||||
};
|
||||
@ -1,472 +0,0 @@
|
||||
import React, { useState, useEffect } from 'react';
|
||||
import {
|
||||
Modal,
|
||||
TextInput,
|
||||
Select,
|
||||
NumberInput,
|
||||
Button,
|
||||
Group,
|
||||
Stack,
|
||||
Text,
|
||||
Paper,
|
||||
Divider,
|
||||
JsonInput,
|
||||
Box,
|
||||
} from '@mantine/core';
|
||||
import { useForm } from '@mantine/form';
|
||||
import { notifications } from '@mantine/notifications';
|
||||
import Editor from '@monaco-editor/react';
|
||||
import { functionApi, runtimeApi } from '../services/apiService';
|
||||
import { FunctionDefinition, CreateFunctionRequest, UpdateFunctionRequest, RuntimeType } from '../types';
|
||||
|
||||
interface FunctionFormProps {
|
||||
opened: boolean;
|
||||
onClose: () => void;
|
||||
onSuccess: () => void;
|
||||
editFunction?: FunctionDefinition;
|
||||
}
|
||||
|
||||
export const FunctionForm: React.FC<FunctionFormProps> = ({
|
||||
opened,
|
||||
onClose,
|
||||
onSuccess,
|
||||
editFunction,
|
||||
}) => {
|
||||
const isEditing = !!editFunction;
|
||||
const [runtimeOptions, setRuntimeOptions] = useState<Array<{value: string; label: string}>>([]);
|
||||
|
||||
// Default images for each runtime
|
||||
const DEFAULT_IMAGES: Record<string, string> = {
|
||||
'nodejs18': 'node:18-alpine',
|
||||
'python3.9': 'python:3.9-alpine',
|
||||
'go1.20': 'golang:1.20-alpine',
|
||||
};
|
||||
|
||||
// Map runtime to Monaco editor language
|
||||
const getEditorLanguage = (runtime: string): string => {
|
||||
const languageMap: Record<string, string> = {
|
||||
'nodejs18': 'javascript',
|
||||
'python3.9': 'python',
|
||||
'go1.20': 'go',
|
||||
};
|
||||
return languageMap[runtime] || 'javascript';
|
||||
};
|
||||
|
||||
// Get default code template based on runtime
|
||||
const getDefaultCode = (runtime: string): string => {
|
||||
const templates: Record<string, string> = {
|
||||
'nodejs18': `exports.handler = async (event, context) => {
|
||||
console.log('Event:', JSON.stringify(event, null, 2));
|
||||
|
||||
return {
|
||||
statusCode: 200,
|
||||
body: JSON.stringify({
|
||||
message: 'Hello from Node.js!',
|
||||
timestamp: new Date().toISOString()
|
||||
})
|
||||
};
|
||||
};`,
|
||||
'python3.9': `import json
|
||||
from datetime import datetime
|
||||
|
||||
def handler(event, context):
|
||||
print('Event:', json.dumps(event, indent=2))
|
||||
|
||||
return {
|
||||
'statusCode': 200,
|
||||
'body': json.dumps({
|
||||
'message': 'Hello from Python!',
|
||||
'timestamp': datetime.now().isoformat()
|
||||
})
|
||||
}`,
|
||||
'go1.20': `package main
|
||||
|
||||
import (
|
||||
"context"
|
||||
"encoding/json"
|
||||
"fmt"
|
||||
"log"
|
||||
"time"
|
||||
)
|
||||
|
||||
type Event map[string]interface{}
|
||||
type Response struct {
|
||||
StatusCode int \`json:"statusCode"\`
|
||||
Body string \`json:"body"\`
|
||||
}
|
||||
|
||||
func Handler(ctx context.Context, event Event) (Response, error) {
|
||||
eventJSON, _ := json.MarshalIndent(event, "", " ")
|
||||
log.Printf("Event: %s", eventJSON)
|
||||
|
||||
body := map[string]interface{}{
|
||||
"message": "Hello from Go!",
|
||||
"timestamp": time.Now().Format(time.RFC3339),
|
||||
}
|
||||
|
||||
bodyJSON, _ := json.Marshal(body)
|
||||
|
||||
return Response{
|
||||
StatusCode: 200,
|
||||
Body: string(bodyJSON),
|
||||
}, nil
|
||||
}`
|
||||
};
|
||||
return templates[runtime] || templates['nodejs18'];
|
||||
};
|
||||
|
||||
useEffect(() => {
|
||||
// Fetch available runtimes from backend
|
||||
const fetchRuntimes = async () => {
|
||||
try {
|
||||
const response = await runtimeApi.getRuntimes();
|
||||
setRuntimeOptions(response.data.runtimes || []);
|
||||
} catch (error) {
|
||||
console.error('Failed to fetch runtimes:', error);
|
||||
// Fallback to default options
|
||||
setRuntimeOptions([
|
||||
{ value: 'nodejs18', label: 'Node.js 18.x' },
|
||||
{ value: 'python3.9', label: 'Python 3.9' },
|
||||
{ value: 'go1.20', label: 'Go 1.20' },
|
||||
]);
|
||||
}
|
||||
};
|
||||
|
||||
if (opened) {
|
||||
fetchRuntimes();
|
||||
}
|
||||
}, [opened]);
|
||||
|
||||
// Update form values when editFunction changes
|
||||
useEffect(() => {
|
||||
if (editFunction) {
|
||||
form.setValues({
|
||||
name: editFunction.name || '',
|
||||
app_id: editFunction.app_id || 'default',
|
||||
runtime: editFunction.runtime || 'nodejs18' as RuntimeType,
|
||||
image: editFunction.image || DEFAULT_IMAGES['nodejs18'] || '',
|
||||
handler: editFunction.handler || 'index.handler',
|
||||
code: editFunction.code || '',
|
||||
environment: editFunction.environment ? JSON.stringify(editFunction.environment, null, 2) : '{}',
|
||||
timeout: editFunction.timeout || '30s',
|
||||
memory: editFunction.memory || 128,
|
||||
owner: {
|
||||
type: editFunction.owner?.type || 'team' as const,
|
||||
name: editFunction.owner?.name || 'FaaS Team',
|
||||
owner: editFunction.owner?.owner || 'admin@example.com',
|
||||
},
|
||||
});
|
||||
} else {
|
||||
// Reset to default values when not editing
|
||||
form.setValues({
|
||||
name: '',
|
||||
app_id: 'default',
|
||||
runtime: 'nodejs18' as RuntimeType,
|
||||
image: DEFAULT_IMAGES['nodejs18'] || '',
|
||||
handler: 'index.handler',
|
||||
code: getDefaultCode('nodejs18'),
|
||||
environment: '{}',
|
||||
timeout: '30s',
|
||||
memory: 128,
|
||||
owner: {
|
||||
type: 'team' as const,
|
||||
name: 'FaaS Team',
|
||||
owner: 'admin@example.com',
|
||||
},
|
||||
});
|
||||
}
|
||||
}, [editFunction, opened]);
|
||||
|
||||
const form = useForm({
|
||||
initialValues: {
|
||||
name: '',
|
||||
app_id: 'default',
|
||||
runtime: 'nodejs18' as RuntimeType,
|
||||
image: DEFAULT_IMAGES['nodejs18'] || '',
|
||||
handler: 'index.handler',
|
||||
code: getDefaultCode('nodejs18'),
|
||||
environment: '{}',
|
||||
timeout: '30s',
|
||||
memory: 128,
|
||||
owner: {
|
||||
type: 'team' as const,
|
||||
name: 'FaaS Team',
|
||||
owner: 'admin@example.com',
|
||||
},
|
||||
},
|
||||
validate: {
|
||||
name: (value) => value.length < 1 ? 'Name is required' : null,
|
||||
app_id: (value) => value.length < 1 ? 'App ID is required' : null,
|
||||
runtime: (value) => !value ? 'Runtime is required' : null,
|
||||
image: (value) => value.length < 1 ? 'Image is required' : null,
|
||||
handler: (value) => value.length < 1 ? 'Handler is required' : null,
|
||||
timeout: (value) => !value.match(/^\d+[smh]$/) ? 'Timeout must be in format like 30s, 5m, 1h' : null,
|
||||
memory: (value) => value < 64 || value > 3008 ? 'Memory must be between 64 and 3008 MB' : null,
|
||||
},
|
||||
});
|
||||
|
||||
const handleRuntimeChange = (runtime: string | null) => {
|
||||
if (runtime && DEFAULT_IMAGES[runtime]) {
|
||||
form.setFieldValue('image', DEFAULT_IMAGES[runtime]);
|
||||
}
|
||||
form.setFieldValue('runtime', runtime as RuntimeType);
|
||||
|
||||
// If creating a new function and no code is set, provide default template
|
||||
if (!isEditing && runtime && (!form.values.code || form.values.code.trim() === '')) {
|
||||
form.setFieldValue('code', getDefaultCode(runtime));
|
||||
}
|
||||
};
|
||||
|
||||
const handleSubmit = async (values: typeof form.values) => {
|
||||
console.log('handleSubmit called with values:', values);
|
||||
console.log('Form validation errors:', form.errors);
|
||||
console.log('Is form valid?', form.isValid());
|
||||
|
||||
// Check each field individually
|
||||
const fieldNames = ['name', 'app_id', 'runtime', 'image', 'handler', 'timeout', 'memory'];
|
||||
fieldNames.forEach(field => {
|
||||
const error = form.validateField(field);
|
||||
console.log(`Field ${field} error:`, error);
|
||||
});
|
||||
|
||||
if (!form.isValid()) {
|
||||
console.log('Form is not valid, validation errors:', form.errors);
|
||||
return;
|
||||
}
|
||||
try {
|
||||
// Parse environment variables JSON
|
||||
let parsedEnvironment;
|
||||
try {
|
||||
parsedEnvironment = values.environment ? JSON.parse(values.environment) : undefined;
|
||||
} catch (error) {
|
||||
console.error('Error parsing environment variables:', error);
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: 'Invalid JSON in environment variables',
|
||||
color: 'red',
|
||||
});
|
||||
return;
|
||||
}
|
||||
if (isEditing && editFunction) {
|
||||
const updateData: UpdateFunctionRequest = {
|
||||
name: values.name,
|
||||
runtime: values.runtime,
|
||||
image: values.image,
|
||||
handler: values.handler,
|
||||
code: values.code || undefined,
|
||||
environment: parsedEnvironment,
|
||||
timeout: values.timeout,
|
||||
memory: values.memory,
|
||||
owner: values.owner,
|
||||
};
|
||||
await functionApi.update(editFunction.id, updateData);
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: 'Function updated successfully',
|
||||
color: 'green',
|
||||
});
|
||||
} else {
|
||||
const createData: CreateFunctionRequest = {
|
||||
name: values.name,
|
||||
app_id: values.app_id,
|
||||
runtime: values.runtime,
|
||||
image: values.image,
|
||||
handler: values.handler,
|
||||
code: values.code || undefined,
|
||||
environment: parsedEnvironment,
|
||||
timeout: values.timeout,
|
||||
memory: values.memory,
|
||||
owner: values.owner,
|
||||
};
|
||||
await functionApi.create(createData);
|
||||
notifications.show({
|
||||
title: 'Success',
|
||||
message: 'Function created successfully',
|
||||
color: 'green',
|
||||
});
|
||||
}
|
||||
onSuccess();
|
||||
onClose();
|
||||
form.reset();
|
||||
} catch (error) {
|
||||
console.error('Error saving function:', error);
|
||||
notifications.show({
|
||||
title: 'Error',
|
||||
message: `Failed to ${isEditing ? 'update' : 'create'} function`,
|
||||
color: 'red',
|
||||
});
|
||||
}
|
||||
};
|
||||
|
||||
return (
|
||||
<Modal
|
||||
opened={opened}
|
||||
onClose={onClose}
|
||||
title={isEditing ? 'Edit Function' : 'Create Function'}
|
||||
size="lg"
|
||||
>
|
||||
<form onSubmit={(e) => {
|
||||
console.log('Form submit event triggered');
|
||||
console.log('Form values:', form.values);
|
||||
console.log('Form errors:', form.errors);
|
||||
console.log('Is form valid?', form.isValid());
|
||||
const result = form.onSubmit(handleSubmit)(e);
|
||||
console.log('Form onSubmit result:', result);
|
||||
return result;
|
||||
}}>
|
||||
<Stack gap="md">
|
||||
<Group grow>
|
||||
<TextInput
|
||||
label="Function Name"
|
||||
placeholder="my-function"
|
||||
required
|
||||
{...form.getInputProps('name')}
|
||||
/>
|
||||
<TextInput
|
||||
label="App ID"
|
||||
placeholder="my-app"
|
||||
required
|
||||
disabled={isEditing}
|
||||
{...form.getInputProps('app_id')}
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<Group grow>
|
||||
<Select
|
||||
label="Runtime"
|
||||
placeholder="Select runtime"
|
||||
required
|
||||
data={runtimeOptions}
|
||||
{...form.getInputProps('runtime')}
|
||||
onChange={handleRuntimeChange}
|
||||
/>
|
||||
<NumberInput
|
||||
label="Memory (MB)"
|
||||
placeholder="128"
|
||||
required
|
||||
min={64}
|
||||
max={3008}
|
||||
{...form.getInputProps('memory')}
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<Group grow>
|
||||
<TextInput
|
||||
label="Timeout"
|
||||
placeholder="30s"
|
||||
required
|
||||
{...form.getInputProps('timeout')}
|
||||
/>
|
||||
</Group>
|
||||
|
||||
<TextInput
|
||||
label="Handler"
|
||||
description="The entry point for your function (e.g., 'index.handler' means handler function in index file)"
|
||||
placeholder="index.handler"
|
||||
required
|
||||
{...form.getInputProps('handler')}
|
||||
/>
|
||||
|
||||
<Box>
|
||||
<Text size="sm" fw={500} mb={5}>
|
||||
Function Code
|
||||
</Text>
|
||||
<Box
|
||||
style={{
|
||||
border: '1px solid #dee2e6',
|
||||
borderRadius: '4px',
|
||||
overflow: 'hidden'
|
||||
}}
|
||||
>
|
||||
<Editor
|
||||
height="400px"
|
||||
language={getEditorLanguage(form.values.runtime)}
|
||||
value={form.values.code}
|
||||
onChange={(value) => form.setFieldValue('code', value || '')}
|
||||
options={{
|
||||
minimap: { enabled: false },
|
||||
scrollBeyondLastLine: false,
|
||||
fontSize: 14,
|
||||
lineNumbers: 'on',
|
||||
roundedSelection: false,
|
||||
scrollbar: {
|
||||
vertical: 'visible',
|
||||
horizontal: 'visible'
|
||||
},
|
||||
automaticLayout: true,
|
||||
wordWrap: 'on',
|
||||
tabSize: 2,
|
||||
insertSpaces: true,
|
||||
folding: true,
|
||||
lineDecorationsWidth: 0,
|
||||
lineNumbersMinChars: 3,
|
||||
renderLineHighlight: 'line',
|
||||
selectOnLineNumbers: true,
|
||||
theme: 'vs-light'
|
||||
}}
|
||||
loading={<Text ta="center" p="xl">Loading editor...</Text>}
|
||||
/>
|
||||
</Box>
|
||||
{form.errors.code && (
|
||||
<Text size="xs" c="red" mt={5}>
|
||||
{form.errors.code}
|
||||
</Text>
|
||||
)}
|
||||
</Box>
|
||||
|
||||
<JsonInput
|
||||
label="Environment Variables"
|
||||
description="JSON object with key-value pairs that will be available in your function runtime"
|
||||
placeholder={`{
|
||||
"NODE_ENV": "production",
|
||||
"API_URL": "https://api.example.com",
|
||||
"DATABASE_HOST": "db.example.com",
|
||||
"LOG_LEVEL": "info"
|
||||
}`}
|
||||
validationError="Invalid JSON - please check your syntax"
|
||||
formatOnBlur
|
||||
autosize
|
||||
minRows={4}
|
||||
{...form.getInputProps('environment')}
|
||||
/>
|
||||
|
||||
<Paper withBorder p="md" bg="gray.0">
|
||||
<Text size="sm" fw={500} mb="xs">Owner Information</Text>
|
||||
<Group grow>
|
||||
<Select
|
||||
label="Owner Type"
|
||||
data={[
|
||||
{ value: 'individual', label: 'Individual' },
|
||||
{ value: 'team', label: 'Team' },
|
||||
]}
|
||||
{...form.getInputProps('owner.type')}
|
||||
/>
|
||||
<TextInput
|
||||
label="Owner Name"
|
||||
placeholder="Team Name"
|
||||
{...form.getInputProps('owner.name')}
|
||||
/>
|
||||
</Group>
|
||||
<TextInput
|
||||
label="Owner Email"
|
||||
placeholder="owner@example.com"
|
||||
mt="xs"
|
||||
{...form.getInputProps('owner.owner')}
|
||||
/>
|
||||
</Paper>
|
||||
|
||||
<Divider />
|
||||
|
||||
<Group justify="flex-end">
|
||||
<Button variant="light" onClick={onClose}>
|
||||
Cancel
|
||||
</Button>
|
||||
<Button type="submit">
|
||||
{isEditing ? 'Update' : 'Create'} Function
|
||||
</Button>
|
||||
</Group>
|
||||
</Stack>
|
||||
</form>
|
||||
</Modal>
|
||||
);
|
||||
};
|
||||
@ -1,125 +0,0 @@
import React, { useEffect, useState } from 'react';
import {
  DataTable,
  TableColumn,
  Badge,
  Group,
  Text,
  Stack
} from '@skybridge/web-components';
import {
  IconPlayerPlay,
  IconCode,
} from '@tabler/icons-react';
import { functionApi } from '../services/apiService';
import { FunctionDefinition } from '../types';

interface FunctionListProps {
  onCreateFunction: () => void;
  onEditFunction: (func: FunctionDefinition) => void;
  onExecuteFunction: (func: FunctionDefinition) => void;
}

export const FunctionList: React.FC<FunctionListProps> = ({
  onCreateFunction,
  onEditFunction,
  onExecuteFunction,
}) => {
  const [functions, setFunctions] = useState<FunctionDefinition[]>([]);
  const [loading, setLoading] = useState(true);
  const [error, setError] = useState<string | null>(null);

  const loadFunctions = async () => {
    try {
      setLoading(true);
      setError(null);
      const data = await functionApi.listFunctions();
      setFunctions(data);
    } catch (error) {
      console.error('Failed to load functions:', error);
      setError('Failed to load functions');
    } finally {
      setLoading(false);
    }
  };

  useEffect(() => {
    loadFunctions();
  }, []);

  const handleDelete = async (func: FunctionDefinition) => {
    await functionApi.deleteFunction(func.id);
    loadFunctions();
  };

  const getStatusColor = (status: string) => {
    switch (status) {
      case 'active': return 'green';
      case 'inactive': return 'gray';
      case 'error': return 'red';
      case 'building': return 'yellow';
      default: return 'blue';
    }
  };

  const columns: TableColumn[] = [
    {
      key: 'name',
      label: 'Function Name',
      sortable: true,
      render: (value, func: FunctionDefinition) => (
        <Group gap="xs">
          <IconCode size={16} />
          <Text fw={500}>{value}</Text>
        </Group>
      )
    },
    {
      key: 'runtime',
      label: 'Runtime',
      render: (value) => (
        <Badge variant="light" size="sm">{value}</Badge>
      )
    },
    {
      key: 'status',
      label: 'Status',
      render: (value) => (
        <Badge color={getStatusColor(value)} size="sm">{value}</Badge>
      )
    },
    {
      key: 'created_at',
      label: 'Created',
      render: (value) => new Date(value).toLocaleDateString()
    },
  ];

  const customActions = [
    {
      key: 'execute',
      label: 'Execute',
      icon: <IconPlayerPlay size={14} />,
      onClick: (func: FunctionDefinition) => onExecuteFunction(func),
    },
  ];

  return (
    <Stack gap="md">
      <DataTable
        data={functions}
        columns={columns}
        loading={loading}
        error={error}
        title="Functions"
        searchable
        onAdd={onCreateFunction}
        onEdit={onEditFunction}
        onDelete={handleDelete}
        onRefresh={loadFunctions}
        customActions={customActions}
        emptyMessage="No functions found"
      />
    </Stack>
  );
};
@ -1,271 +0,0 @@
|
||||
import React, { useState, useEffect } from 'react';
|
||||
import {
|
||||
Paper,
|
||||
Stack,
|
||||
Text,
|
||||
Divider,
|
||||
Box,
|
||||
ScrollArea,
|
||||
Group,
|
  Title,
  ActionIcon,
} from '@mantine/core';
import { IconX } from '@tabler/icons-react';
import {
  FormSidebar,
  FormField
} from '@skybridge/web-components';
import Editor from '@monaco-editor/react';
import { functionApi, runtimeApi } from '../services/apiService';
import { FunctionDefinition, CreateFunctionRequest, UpdateFunctionRequest, RuntimeType } from '../types';

interface FunctionSidebarProps {
  opened: boolean;
  onClose: () => void;
  onSuccess: () => void;
  editFunction?: FunctionDefinition;
}

export const FunctionSidebar: React.FC<FunctionSidebarProps> = ({
  opened,
  onClose,
  onSuccess,
  editFunction,
}) => {
  const [runtimeOptions, setRuntimeOptions] = useState<Array<{value: string; label: string}>>([]);
  const [codeContent, setCodeContent] = useState('');

  // Default images for each runtime
  const DEFAULT_IMAGES: Record<string, string> = {
    'nodejs18': 'node:18-alpine',
    'python3.9': 'python:3.9-alpine',
    'go1.20': 'golang:1.20-alpine',
  };

  // Map runtime to Monaco editor language
  const getEditorLanguage = (runtime: string): string => {
    const languageMap: Record<string, string> = {
      'nodejs18': 'javascript',
      'python3.9': 'python',
      'go1.20': 'go',
    };
    return languageMap[runtime] || 'javascript';
  };

  // Get default code template based on runtime
  const getDefaultCode = (runtime: string): string => {
    const templates: Record<string, string> = {
      'nodejs18': `exports.handler = async (event, context) => {
  console.log('Event:', JSON.stringify(event, null, 2));

  return {
    statusCode: 200,
    body: JSON.stringify({
      message: 'Hello from Node.js!',
      timestamp: new Date().toISOString()
    })
  };
};`,
      'python3.9': `import json
from datetime import datetime

def handler(event, context):
    print('Event:', json.dumps(event, indent=2))

    return {
        'statusCode': 200,
        'body': json.dumps({
            'message': 'Hello from Python!',
            'timestamp': datetime.now().isoformat()
        })
    }`,
      'go1.20': `package main

import (
	"context"
	"encoding/json"
	"fmt"
	"time"
)

type Event map[string]interface{}

func Handler(ctx context.Context, event Event) (map[string]interface{}, error) {
	fmt.Printf("Event: %+v\\n", event)

	return map[string]interface{}{
		"statusCode": 200,
		"body": map[string]interface{}{
			"message": "Hello from Go!",
			"timestamp": time.Now().Format(time.RFC3339),
		},
	}, nil
}`
    };
    return templates[runtime] || templates['nodejs18'];
  };

  useEffect(() => {
    loadRuntimeOptions();
    if (editFunction) {
      setCodeContent(editFunction.code || '');
    }
  }, [editFunction]);

  const loadRuntimeOptions = async () => {
    try {
      const runtimes = await runtimeApi.listRuntimes();
      const options = runtimes.map((runtime: RuntimeType) => ({
        value: runtime.name,
        label: `${runtime.name} (${runtime.version})`
      }));
      setRuntimeOptions(options);
    } catch (error) {
      console.error('Failed to load runtimes:', error);
      // Fallback options
      setRuntimeOptions([
        { value: 'nodejs18', label: 'Node.js 18' },
        { value: 'python3.9', label: 'Python 3.9' },
        { value: 'go1.20', label: 'Go 1.20' }
      ]);
    }
  };

  const fields: FormField[] = [
    {
      name: 'name',
      label: 'Function Name',
      type: 'text',
      required: true,
      placeholder: 'my-function',
      validation: { pattern: /^[a-z0-9-]+$/ },
    },
    {
      name: 'description',
      label: 'Description',
      type: 'textarea',
      required: false,
      placeholder: 'Function description...',
    },
    {
      name: 'runtime',
      label: 'Runtime',
      type: 'select',
      required: true,
      options: runtimeOptions,
      defaultValue: 'nodejs18',
    },
    {
      name: 'timeout',
      label: 'Timeout (seconds)',
      type: 'number',
      required: true,
      defaultValue: 30,
    },
    {
      name: 'memory',
      label: 'Memory (MB)',
      type: 'number',
      required: true,
      defaultValue: 128,
    },
    {
      name: 'environment_variables',
      label: 'Environment Variables',
      type: 'json',
      required: false,
      defaultValue: '{}',
    },
  ];

  const handleSubmit = async (values: any) => {
    const submitData = {
      ...values,
      code: codeContent,
      docker_image: DEFAULT_IMAGES[values.runtime] || DEFAULT_IMAGES['nodejs18'],
    };

    if (editFunction) {
      const updateRequest: UpdateFunctionRequest = {
        description: submitData.description,
        code: submitData.code,
        timeout: submitData.timeout,
        memory: submitData.memory,
        environment_variables: submitData.environment_variables,
        docker_image: submitData.docker_image,
      };
      await functionApi.updateFunction(editFunction.id, updateRequest);
    } else {
      const createRequest: CreateFunctionRequest = submitData;
      await functionApi.createFunction(createRequest);
    }
  };

  // Create a sidebar that works with SidebarLayout
  if (!opened) return null;

  return (
    <Paper
      style={{
        height: '100%',
        borderRadius: 0,
        display: 'flex',
        flexDirection: 'column',
        borderLeft: '1px solid var(--mantine-color-gray-3)',
        backgroundColor: 'var(--mantine-color-body)',
      }}
    >
      {/* Header */}
      <Group justify="space-between" p="md" style={{ borderBottom: '1px solid var(--mantine-color-gray-3)' }}>
        <Title order={4}>
          {editFunction ? 'Edit Function' : 'Create Function'}
        </Title>
        <ActionIcon
          variant="subtle"
          color="gray"
          onClick={onClose}
        >
          <IconX size={18} />
        </ActionIcon>
      </Group>

      {/* Content */}
      <ScrollArea style={{ flex: 1 }}>
        <Stack gap="md" p="md">
          <FormSidebar
            opened={true} // Always open since we're embedding it
            onClose={() => {}} // Handled by parent
            onSuccess={onSuccess}
            title="Function"
            editMode={!!editFunction}
            editItem={editFunction}
            fields={fields}
            onSubmit={handleSubmit}
            width={600}
            style={{ position: 'relative', right: 'auto', top: 'auto', bottom: 'auto' }}
          />

          <Divider />

          <Box>
            <Text fw={500} mb="sm">Code Editor</Text>
            <Box h={300} style={{ border: '1px solid var(--mantine-color-gray-3)' }}>
              <Editor
                height="300px"
                language={getEditorLanguage(editFunction?.runtime || 'nodejs18')}
                value={codeContent}
                onChange={(value) => setCodeContent(value || '')}
                theme="vs-dark"
                options={{
                  minimap: { enabled: false },
                  scrollBeyondLastLine: false,
                  fontSize: 14,
                }}
              />
            </Box>
          </Box>
        </Stack>
      </ScrollArea>
    </Paper>
  );
};
@ -1,21 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import { BrowserRouter } from 'react-router-dom';
import { MantineProvider } from '@mantine/core';
import { Notifications } from '@mantine/notifications';
import App from './App';

const root = ReactDOM.createRoot(
  document.getElementById('root') as HTMLElement
);

root.render(
  <React.StrictMode>
    <MantineProvider>
      <Notifications />
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </MantineProvider>
  </React.StrictMode>
);
@ -1,21 +0,0 @@
import React from 'react';
import ReactDOM from 'react-dom/client';
import { BrowserRouter } from 'react-router-dom';
import { MantineProvider } from '@mantine/core';
import { Notifications } from '@mantine/notifications';
import App from './App';

const root = ReactDOM.createRoot(
  document.getElementById('root') as HTMLElement
);

root.render(
  <React.StrictMode>
    <MantineProvider>
      <Notifications />
      <BrowserRouter>
        <App />
      </BrowserRouter>
    </MantineProvider>
  </React.StrictMode>
);
@ -1,105 +0,0 @@
import axios from 'axios';
import {
  FunctionDefinition,
  FunctionExecution,
  CreateFunctionRequest,
  UpdateFunctionRequest,
  ExecuteFunctionRequest,
  ExecuteFunctionResponse,
  RuntimeInfo,
} from '../types';

const API_BASE_URL = process.env.NODE_ENV === 'production'
  ? '/api/faas/api'
  : 'http://localhost:8083/api';

const api = axios.create({
  baseURL: API_BASE_URL,
  headers: {
    'Content-Type': 'application/json',
    'X-User-Email': 'admin@example.com', // Mock auth header
  },
});

// Add response interceptor for error handling
api.interceptors.response.use(
  (response) => response,
  (error) => {
    console.error('API Error:', error.response?.data || error.message);
    return Promise.reject(error);
  }
);

export const functionApi = {
  // Function management
  list: (appId?: string, limit = 50, offset = 0) =>
    api.get<{ functions: FunctionDefinition[] }>('/functions', {
      params: { app_id: appId, limit, offset },
    }),

  create: (data: CreateFunctionRequest) => {
    console.log('Making API call to create function with data:', data);
    return api.post<FunctionDefinition>('/functions', data);
  },

  getById: (id: string) =>
    api.get<FunctionDefinition>(`/functions/${id}`),

  update: (id: string, data: UpdateFunctionRequest) =>
    api.put<FunctionDefinition>(`/functions/${id}`, data),

  delete: (id: string) =>
    api.delete(`/functions/${id}`),

  deploy: (id: string, force = false) =>
    api.post(`/functions/${id}/deploy`, { force }),

  // Function execution
  execute: (id: string, data: Omit<ExecuteFunctionRequest, 'function_id'>) =>
    api.post<ExecuteFunctionResponse>(`/functions/${id}/execute`, data),

  invoke: (id: string, data?: { input?: any }) =>
    api.post<ExecuteFunctionResponse>(`/functions/${id}/invoke`, data),
};

export const executionApi = {
  // Execution management
  list: (functionId?: string, limit = 50, offset = 0) =>
    api.get<{ executions: FunctionExecution[] }>('/executions', {
      params: { function_id: functionId, limit, offset },
    }),

  getById: (id: string) =>
    api.get<FunctionExecution>(`/executions/${id}`),

  cancel: (id: string) =>
    api.delete(`/executions/${id}`),

  getLogs: (id: string) => {
    console.debug(`[API] Fetching logs for execution ${id}`);
    return api.get<{ logs: string[] }>(`/executions/${id}/logs`)
      .then(response => {
        console.debug(`[API] Successfully fetched logs for execution ${id}:`, {
          logCount: response.data.logs?.length || 0,
          logs: response.data.logs
        });
        return response;
      })
      .catch(error => {
        console.error(`[API] Failed to fetch logs for execution ${id}:`, error);
        throw error;
      });
  },

  getRunning: () =>
    api.get<{ executions: FunctionExecution[]; count: number }>('/executions/running'),
};

export const healthApi = {
  health: () => api.get('/health'),
  ready: () => api.get('/ready'),
};

export const runtimeApi = {
  getRuntimes: () => api.get('/runtimes'),
};
@ -1,68 +0,0 @@
export interface FunctionDefinition {
  id: string;
  name: string;
  description?: string;
  runtime: RuntimeType;
  code: string;
  status: 'active' | 'inactive';
  createdAt: string;
  updatedAt: string;
  tags?: string[];
  timeout?: number;
  memoryLimit?: number;
  image?: string;
  env_vars?: Record<string, string>;
}

export interface FunctionExecution {
  id: string;
  function_id: string;
  input?: any;
  output?: any;
  error?: string;
  status: 'pending' | 'running' | 'completed' | 'failed' | 'timeout' | 'canceled';
  duration: number; // Duration in nanoseconds
  memory_used: number;
  container_id?: string;
  executor_id: string;
  created_at: string;
  started_at?: string;
  completed_at?: string;
}

export type RuntimeType = 'nodejs18' | 'python3.9' | 'go1.20';

export interface CreateFunctionRequest {
  name: string;
  description?: string;
  runtime: RuntimeType;
  code: string;
  image?: string;
  timeout?: number;
  memory_limit?: number;
  env_vars?: Record<string, string>;
  tags?: string[];
}

export interface UpdateFunctionRequest extends Partial<CreateFunctionRequest> {}

export interface ExecuteFunctionRequest {
  function_id: string;
  input?: any;
  async?: boolean;
}

export interface ExecuteFunctionResponse {
  execution_id: string;
  output?: any;
  error?: string;
  status: 'pending' | 'running' | 'completed' | 'failed';
  duration?: number;
}

export interface RuntimeInfo {
  runtime: RuntimeType;
  image: string;
  version: string;
  available: boolean;
}
@ -1,92 +0,0 @@
export type RuntimeType = 'nodejs18' | 'python3.9' | 'go1.20' | 'custom';

export type ExecutionStatus = 'pending' | 'running' | 'completed' | 'failed' | 'timeout' | 'canceled';

export type OwnerType = 'individual' | 'team';

export interface Owner {
  type: OwnerType;
  name: string;
  owner: string;
}

export interface FunctionDefinition {
  id: string;
  name: string;
  app_id: string;
  runtime: RuntimeType;
  image: string;
  handler: string;
  code?: string;
  environment?: Record<string, string>;
  timeout: string;
  memory: number;
  owner: Owner;
  created_at: string;
  updated_at: string;
}

export interface FunctionExecution {
  id: string;
  function_id: string;
  status: ExecutionStatus;
  input?: any;
  output?: any;
  error?: string;
  duration?: number;
  memory_used?: number;
  logs?: string[];
  container_id?: string;
  executor_id: string;
  created_at: string;
  started_at?: string;
  completed_at?: string;
}

export interface CreateFunctionRequest {
  name: string;
  app_id: string;
  runtime: RuntimeType;
  image: string;
  handler: string;
  code?: string;
  environment?: Record<string, string>;
  timeout: string;
  memory: number;
  owner: Owner;
}

export interface UpdateFunctionRequest {
  name?: string;
  runtime?: RuntimeType;
  image?: string;
  handler?: string;
  code?: string;
  environment?: Record<string, string>;
  timeout?: string;
  memory?: number;
  owner?: Owner;
}

export interface ExecuteFunctionRequest {
  function_id: string;
  input?: any;
  async?: boolean;
}

export interface ExecuteFunctionResponse {
  execution_id: string;
  status: ExecutionStatus;
  output?: any;
  error?: string;
  duration?: number;
  memory_used?: number;
}

export interface RuntimeInfo {
  type: RuntimeType;
  version: string;
  available: boolean;
  default_image: string;
  description: string;
}
@ -1,86 +0,0 @@
const HtmlWebpackPlugin = require('html-webpack-plugin');
const { ModuleFederationPlugin } = require('webpack').container;
const webpack = require('webpack');

// Import the microfrontends registry
const { getExposesConfig } = require('../../web/src/microfrontends.js');

module.exports = {
  mode: 'development',
  entry: './src/index.tsx',
  devServer: {
    port: 3003,
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
  },
  resolve: {
    extensions: ['.tsx', '.ts', '.js', '.jsx'],
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx|ts|tsx)$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: [
              '@babel/preset-react',
              '@babel/preset-typescript',
            ],
          },
        },
      },
      {
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'faas',
      filename: 'remoteEntry.js',
      exposes: getExposesConfig('faas'),
      shared: {
        react: {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        '@mantine/core': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@mantine/hooks': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@mantine/notifications': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@tabler/icons-react': {
          singleton: true,
          requiredVersion: '^2.40.0',
          eager: false,
        },
      },
    }),
    new HtmlWebpackPlugin({
      template: './public/index.html',
    }),
    new webpack.DefinePlugin({
      'process.env': JSON.stringify(process.env),
    }),
  ],
};
@ -1,85 +0,0 @@
const HtmlWebpackPlugin = require('html-webpack-plugin');
const { ModuleFederationPlugin } = require('webpack').container;
const webpack = require('webpack');

module.exports = {
  mode: 'development',
  entry: './src/index.tsx',
  devServer: {
    port: 3003,
    headers: {
      'Access-Control-Allow-Origin': '*',
    },
  },
  resolve: {
    extensions: ['.tsx', '.ts', '.js', '.jsx'],
  },
  module: {
    rules: [
      {
        test: /\.(js|jsx|ts|tsx)$/,
        exclude: /node_modules/,
        use: {
          loader: 'babel-loader',
          options: {
            presets: [
              '@babel/preset-react',
              '@babel/preset-typescript',
            ],
          },
        },
      },
      {
        test: /\.css$/,
        use: ['style-loader', 'css-loader'],
      },
    ],
  },
  plugins: [
    new ModuleFederationPlugin({
      name: 'faas',
      filename: 'remoteEntry.js',
      exposes: {
        './App': './src/App.tsx',
      },
      shared: {
        react: {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        'react-dom': {
          singleton: true,
          requiredVersion: '^18.2.0',
          eager: false,
        },
        '@mantine/core': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@mantine/hooks': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@mantine/notifications': {
          singleton: true,
          requiredVersion: '^7.0.0',
          eager: false,
        },
        '@tabler/icons-react': {
          singleton: true,
          requiredVersion: '^2.40.0',
          eager: false,
        },
      },
    }),
    new HtmlWebpackPlugin({
      template: './public/index.html',
    }),
    new webpack.DefinePlugin({
      'process.env': JSON.stringify(process.env),
    }),
  ],
};
@ -1,17 +1,16 @@
module github.com/RyanCopley/skybridge/kms
module github.com/kms/api-key-service

go 1.23.0

toolchain go1.24.4

require (
	github.com/DATA-DOG/go-sqlmock v1.5.2
	github.com/gin-gonic/gin v1.9.1
	github.com/go-playground/validator/v10 v10.16.0
	github.com/golang-jwt/jwt/v5 v5.3.0
	github.com/golang-migrate/migrate/v4 v4.16.2
	github.com/google/uuid v1.4.0
	github.com/gorilla/mux v1.7.4
	github.com/jmoiron/sqlx v1.4.0
	github.com/joho/godotenv v1.4.0
	github.com/lib/pq v1.10.9
	github.com/redis/go-redis/v9 v9.12.1
@ -22,6 +21,7 @@ require (
)

require (
	github.com/DATA-DOG/go-sqlmock v1.5.2 // indirect
	github.com/bytedance/sonic v1.9.1 // indirect
	github.com/cespare/xxhash/v2 v2.3.0 // indirect
	github.com/chenzhuoyu/base64x v0.0.0-20221115062448-fe3a3abad311 // indirect
@ -32,7 +32,9 @@ require (
	github.com/go-playground/locales v0.14.1 // indirect
	github.com/go-playground/universal-translator v0.18.1 // indirect
	github.com/goccy/go-json v0.10.2 // indirect
	github.com/google/go-cmp v0.5.9 // indirect
	github.com/hashicorp/errwrap v1.1.0 // indirect
	github.com/hashicorp/go-multierror v1.1.1 // indirect
	github.com/jmoiron/sqlx v1.4.0 // indirect
	github.com/json-iterator/go v1.1.12 // indirect
	github.com/klauspost/cpuid/v2 v2.2.4 // indirect
	github.com/leodido/go-urn v1.2.4 // indirect
@ -43,6 +45,7 @@ require (
	github.com/pmezard/go-difflib v1.0.0 // indirect
	github.com/twitchyliquid64/golang-asm v0.15.1 // indirect
	github.com/ugorji/go/codec v1.2.11 // indirect
	go.uber.org/atomic v1.7.0 // indirect
	go.uber.org/multierr v1.10.0 // indirect
	golang.org/x/arch v0.3.0 // indirect
	golang.org/x/net v0.10.0 // indirect
@ -1,7 +1,10 @@
filippo.io/edwards25519 v1.1.0 h1:FNf4tywRC1HmFuKW5xopWpigGjJKiJSV0Cqo0cJWDaA=
filippo.io/edwards25519 v1.1.0/go.mod h1:BxyFTGdWcka3PhytdK4V28tE5sGfRvvvRV7EaN4VDT4=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161 h1:L/gRVlceqvL25UVaW/CKtUDjefjrs0SPonmDGUVOYP0=
github.com/Azure/go-ansiterm v0.0.0-20230124172434-306776ec8161/go.mod h1:xomTg63KZ2rFqZQzSB4Vz2SUXa1BpHTVz9L5PTmPC4E=
github.com/DATA-DOG/go-sqlmock v1.5.2 h1:OcvFkGmslmlZibjAjaHm3L//6LiuBgolP7OputlJIzU=
github.com/DATA-DOG/go-sqlmock v1.5.2/go.mod h1:88MAG/4G7SMwSE3CeA0ZKzrT5CiOU3OJ+JlNzwDqpNU=
github.com/Microsoft/go-winio v0.6.1 h1:9/kr64B9VUZrLm5YYwbGtUJnMgqWVOdUAXu6Migciow=
github.com/Microsoft/go-winio v0.6.1/go.mod h1:LRdKpFKfdobln8UmuiYcKPot9D2v6svN5+sAH+4kjUM=
github.com/bsm/ginkgo/v2 v2.12.0 h1:Ny8MWAHyOepLGlLKYmXG4IEkioBysk6GpaRTLC8zwWs=
github.com/bsm/ginkgo/v2 v2.12.0/go.mod h1:SwYbGRRDovPVboqFv0tPTcG1sN61LM1Z4ARdbAV9g4c=
github.com/bsm/gomega v1.27.10 h1:yeMWxP2pV2fG3FgAODIY8EiRE3dy0aeFYt4l7wh6yKA=
@ -19,6 +22,16 @@ github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f h1:lO4WD4F/rVNCu3HqELle0jiPLLBs70cWOduZpkS1E78=
github.com/dgryski/go-rendezvous v0.0.0-20200823014737-9f7001d12a5f/go.mod h1:cuUVRXasLTGF7a8hSLbxyZXjz+1KgoB3wDUb6vlszIc=
github.com/dhui/dktest v0.3.16 h1:i6gq2YQEtcrjKbeJpBkWjE8MmLZPYllcjOFbTZuPDnw=
github.com/dhui/dktest v0.3.16/go.mod h1:gYaA3LRmM8Z4vJl2MA0THIigJoZrwOansEOsp+kqxp0=
github.com/docker/distribution v2.8.2+incompatible h1:T3de5rq0dB1j30rp0sA2rER+m322EBzniBPB6ZIzuh8=
github.com/docker/distribution v2.8.2+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v20.10.24+incompatible h1:Ugvxm7a8+Gz6vqQYQQ2W7GYq5EUPaAiuPgIfVyI3dYE=
github.com/docker/docker v20.10.24+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
github.com/docker/go-connections v0.4.0/go.mod h1:Gbd7IOopHjR8Iph03tsViu4nIes5XhDvyHbTtUxmeec=
github.com/docker/go-units v0.5.0 h1:69rxXcBk27SvSaaxTtLh/8llcHD8vYHT7WSdRZ/jvr4=
github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk=
github.com/gabriel-vasile/mimetype v1.4.2 h1:w5qFW6JKBz9Y393Y4q372O9A7cUSequkh1Q7OhCmWKU=
github.com/gabriel-vasile/mimetype v1.4.2/go.mod h1:zApsH/mKG4w07erKIaJPFiX0Tsq9BFQgN3qGY5GnNgA=
github.com/gin-contrib/sse v0.1.0 h1:Y/yl/+YNO8GZSjAhjMsSuLt29uWRFHdHYUb5lYOV9qE=
@ -33,12 +46,15 @@ github.com/go-playground/universal-translator v0.18.1 h1:Bcnm0ZwsGyWbCzImXv+pAJn
github.com/go-playground/universal-translator v0.18.1/go.mod h1:xekY+UJKNuX9WP91TpwSH2VMlDf28Uj24BCp08ZFTUY=
github.com/go-playground/validator/v10 v10.16.0 h1:x+plE831WK4vaKHO/jpgUGsvLKIqRRkz6M78GuJAfGE=
github.com/go-playground/validator/v10 v10.16.0/go.mod h1:9iXMNT7sEkjXb0I+enO7QXmzG6QCsPWY4zveKFVRSyU=
github.com/go-sql-driver/mysql v1.8.1 h1:LedoTUt/eveggdHS9qUFC1EFSa8bU2+1pZjSRpvNJ1Y=
github.com/go-sql-driver/mysql v1.8.1/go.mod h1:wEBSXgmK//2ZFJyE+qWnIsVGmvmEKlqwuVSjsCm7DZg=
github.com/goccy/go-json v0.10.2 h1:CrxCmQqYDkv1z7lO7Wbh2HN93uovUHgrECaO5ZrCXAU=
github.com/goccy/go-json v0.10.2/go.mod h1:6MelG93GURQebXPDq3khkgXZkazVtN9CRI+MGFi0w8I=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang-jwt/jwt/v5 v5.3.0 h1:pv4AsKCKKZuqlgs5sUmn4x8UlGa0kEVt/puTpKx9vvo=
github.com/golang-jwt/jwt/v5 v5.3.0/go.mod h1:fxCRLWMO43lRc8nhHWY6LGqRcf+1gQWArsqaEUEa5bE=
github.com/golang-migrate/migrate/v4 v4.16.2 h1:8coYbMKUyInrFk1lfGfRovTLAW7PhWp8qQDT2iKfuoA=
github.com/golang-migrate/migrate/v4 v4.16.2/go.mod h1:pfcJX4nPHaVdc5nmdCikFBWtm+UBpiZjRNNsyBbp0/o=
github.com/golang/protobuf v1.5.0/go.mod h1:FsONVRAS9T7sI+LIUmWTfcYkHO4aIWwzhcaSAoJOfIk=
github.com/google/go-cmp v0.5.5/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.9 h1:O2Tfq5qg4qc4AmwVlvv0oLiVAGB7enBSJ2x2DqQFi38=
@ -48,6 +64,11 @@ github.com/google/uuid v1.4.0 h1:MtMxsa51/r9yyhkyLsVeVt0B+BGQZzpQiTQ4eHZ8bc4=
github.com/google/uuid v1.4.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/gorilla/mux v1.7.4 h1:VuZ8uybHlWmqV03+zRzdwKL4tUnIp1MAQtp1mIFE1bc=
github.com/gorilla/mux v1.7.4/go.mod h1:DVbg23sWSpFRCP0SfiEN6jmj59UnW/n46BH5rLB71So=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/jmoiron/sqlx v1.4.0 h1:1PLqN7S1UYp5t4SrVVnt4nUVNemrDAtxlulVe+Qgm3o=
github.com/jmoiron/sqlx v1.4.0/go.mod h1:ZrZ7UsYB/weZdl2Bxg6jCRO9c3YHl8r3ahlKmRT4JLY=
github.com/joho/godotenv v1.4.0 h1:3l4+N6zfMWnkbPEXKng2o2/MR5mSwTrBih4ZEkkz1lg=
@ -64,19 +85,30 @@ github.com/lib/pq v1.10.9 h1:YXG7RB+JIjhP29X+OtkiDnYaXQwpS4JEWq7dtCCRUEw=
github.com/lib/pq v1.10.9/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/mattn/go-isatty v0.0.19 h1:JITubQf0MOLdlGRuRq+jtsDlekdYPia9ZFsB8h/APPA=
github.com/mattn/go-isatty v0.0.19/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/mattn/go-sqlite3 v1.14.22 h1:2gZY6PC6kBnID23Tichd1K+Z0oS6nE/XwU+Vz/5o4kU=
github.com/mattn/go-sqlite3 v1.14.22/go.mod h1:Uh1q+B4BYcTPb+yiD3kU8Ct7aC0hY9fxUwlHK0RXw+Y=
github.com/moby/term v0.5.0 h1:xt8Q1nalod/v7BqbG21f8mQPqH+xAaC9C3N3wfWbVP0=
github.com/moby/term v0.5.0/go.mod h1:8FzsFHVUBGZdbDsJw/ot+X+d5HLUbvklYLJ9uGfcI3Y=
github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M=
github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk=
github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
github.com/morikuni/aec v1.0.0/go.mod h1:BbKIizmSmc5MMPqRYbxO4ZU0S0+P200+tUnFx7PXmsc=
github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
github.com/opencontainers/image-spec v1.0.2 h1:9yCKha/T5XdGtO0q9Q9a6T5NUCsTn/DrBg0D7ufOcFM=
github.com/opencontainers/image-spec v1.0.2/go.mod h1:BtxoFyWECRxE4U/7sNtV5W15zMzWCbyJoFRP3s7yZA0=
github.com/pelletier/go-toml/v2 v2.0.8 h1:0ctb6s9mE31h0/lhu+J6OPmVeDxJn+kYnJc2jZR9tGQ=
github.com/pelletier/go-toml/v2 v2.0.8/go.mod h1:vuYfssBdrU2XDZ9bYydBu6t+6a6PYNcZljzZR9VXg+4=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/redis/go-redis/v9 v9.12.1 h1:k5iquqv27aBtnTm2tIkROUDp8JBXhXZIVu1InSgvovg=
github.com/redis/go-redis/v9 v9.12.1/go.mod h1:huWgSWd8mW6+m0VPhJjSSQ+d6Nh1VICQ6Q5lHuCH/Iw=
github.com/sirupsen/logrus v1.9.2 h1:oxx1eChJGI6Uks2ZC4W1zpLlVgqB8ner4EuQwV4Ik1Y=
github.com/sirupsen/logrus v1.9.2/go.mod h1:naHLuLoDiP4jHNo9R0sCBMtWGeIprob74mVsIT4qYEQ=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
@ -93,6 +125,8 @@ github.com/twitchyliquid64/golang-asm v0.15.1 h1:SU5vSMR7hnwNxj24w34ZyCi/FmDZTkS
github.com/twitchyliquid64/golang-asm v0.15.1/go.mod h1:a1lVb/DtPvCB8fslRZhAngC2+aY1QWCk3Cedj/Gdt08=
github.com/ugorji/go/codec v1.2.11 h1:BMaWp1Bb6fHwEtbplGBGJ498wD+LKlNSl25MjdZY4dU=
github.com/ugorji/go/codec v1.2.11/go.mod h1:UNopzCgEMSXjBc6AOMqYvWC1ktqTAfzJZUZgYf6w6lg=
go.uber.org/atomic v1.7.0 h1:ADUqmZGgLDDfbSL9ZmPxKTybcoEYHgpYfELNoN+7hsw=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/goleak v1.2.0 h1:xqgm/S+aQvhWFTtR0XK3Jvg7z8kGV8P4X14IzwN3Eqk=
go.uber.org/goleak v1.2.0/go.mod h1:XJYK+MuIchqpmGmUSAzotztawfKvYLUIgg7guXrwVUo=
go.uber.org/multierr v1.10.0 h1:S0h4aNzvfcFsC3dRF1jLoaov7oRaKqRGC/pUEJ2yvPQ=
@ -104,6 +138,8 @@ golang.org/x/arch v0.3.0 h1:02VY4/ZcO/gBOH6PUaoiptASxtXU10jazRCP865E97k=
golang.org/x/arch v0.3.0/go.mod h1:5om86z9Hs0C8fWVUuoMHwpExlXzs5Tkyp9hOrfG7pp8=
golang.org/x/crypto v0.14.0 h1:wBqGXzWJW6m1XrIKlAH0Hs1JJ7+9KBwnIO8v66Q9cHc=
golang.org/x/crypto v0.14.0/go.mod h1:MVFd36DqK4CsrnJYDkBA3VC4m2GkXAM0PvzMCn4JQf4=
golang.org/x/mod v0.10.0 h1:lFO9qtOdlre5W1jxS3r/4szv2/6iXxScdzjoBMXNhYk=
golang.org/x/mod v0.10.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/net v0.10.0 h1:X2//UzNDwYmtCLn7To6G58Wr6f5ahEAQgKNzv9Y951M=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -114,6 +150,8 @@ golang.org/x/text v0.13.0 h1:ablQoSUd0tRdKxZewP80B+BaqeKJuVhuRxj/dkrun3k=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.9.1 h1:8WMNJAz3zrtPmnYC7ISf5dEn3MT0gY7jBJfw27yrrLo=
golang.org/x/tools v0.9.1/go.mod h1:owI94Op576fPu3cIGQeHs3joujW/2Oc6MtlxbF5dfNc=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.30.0 h1:kPPoIgf3TsEvrm0PFe15JQ+570QVxYzEvvHqChK+cng=
@ -8,7 +8,7 @@ import (
	"github.com/google/uuid"
	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/kms/api-key-service/internal/config"
)

// EventType represents the type of audit event
@ -12,8 +12,8 @@ import (

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/errors"
)

// HeaderValidator provides secure validation of authentication headers
@ -10,10 +10,10 @@ import (
	"github.com/golang-jwt/jwt/v5"
	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/cache"
	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/domain"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/cache"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/domain"
	"github.com/kms/api-key-service/internal/errors"
)

// JWTManager handles JWT token operations
@ -14,9 +14,9 @@ import (

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/domain"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/domain"
	"github.com/kms/api-key-service/internal/errors"
)

// OAuth2Provider represents an OAuth2/OIDC provider
@ -9,9 +9,9 @@ import (

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/cache"
	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/cache"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/errors"
)

// PermissionManager handles hierarchical permission management
@ -17,9 +17,9 @@ import (
	"github.com/google/uuid"
	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/domain"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/domain"
	"github.com/kms/api-key-service/internal/errors"
)

// SAMLProvider represents a SAML 2.0 identity provider
@ -7,8 +7,8 @@ import (

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/domain"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/domain"
	"github.com/kms/api-key-service/internal/errors"
)

// ResourceType represents different types of resources
@ -7,8 +7,8 @@ import (

	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/errors"
)

// CacheProvider defines the interface for cache operations
@ -7,8 +7,8 @@ import (
	"github.com/redis/go-redis/v9"
	"go.uber.org/zap"

	"github.com/RyanCopley/skybridge/kms/internal/config"
	"github.com/RyanCopley/skybridge/kms/internal/errors"
	"github.com/kms/api-key-service/internal/config"
	"github.com/kms/api-key-service/internal/errors"
)

// RedisCache implements CacheProvider using Redis
@ -8,7 +8,7 @@ import (

	_ "github.com/lib/pq"

	"github.com/RyanCopley/skybridge/kms/internal/repository"
	"github.com/kms/api-key-service/internal/repository"
)

// PostgresProvider implements the DatabaseProvider interface
Some files were not shown because too many files have changed in this diff.