GoTech Demo
Bilingual Guide

GoTech Demo Operation Manual

Operating instructions for each demo, how to read the on-screen elements, and a glossary of the key terms. Skim it in the 10 minutes before the interview to make sure you can explain every term in plain language.

How to use: open a demo page → click "Start Demo" → watch the live metrics change → a technical explanation sits at the bottom of each page
01

High Concurrency

Demonstrates how a Go backend handles burst traffic using worker pools, rate limiting, and request deduplication.

What You See on Screen

Active Goroutines
Green squares light up as goroutines process tasks in the worker pool.
Throughput chart
Requests processed per second, drawn in real time on a canvas.
Rate Limited count
When traffic exceeds the limit, excess requests are rejected (HTTP 429).
Singleflight Merged count
Duplicate requests for the same cache key are merged into one DB call.
Event Log
Real-time log showing bursts, rejections, and merges as they happen.

Key Terms

Goroutine: A lightweight thread managed by the Go runtime; it starts at just 2KB of stack, so a single machine can run hundreds of thousands.
Worker Pool: Caps how many goroutines run at once, so unbounded spawning cannot overwhelm downstream systems.
Rate Limit: Caps requests per unit of time, protecting the system from being flattened by traffic.
Singleflight: An official Go package (golang.org/x/sync/singleflight); concurrent requests for the same key execute only once and share the result.
Semaphore: A synchronization primitive that bounds access to a resource; used here to cap the connection pool size (the sketch below uses a semaphore channel as the worker-pool bound).
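The three mechanisms compose naturally in one handler. A minimal Go sketch of the pattern this demo simulates; the handler name, cache key, and limits are hypothetical, and it assumes the golang.org/x/time/rate and golang.org/x/sync/singleflight packages:

package main

import (
	"fmt"
	"net/http"

	"golang.org/x/sync/singleflight"
	"golang.org/x/time/rate"
)

var (
	// Rate limit: allow 1000 req/s with bursts of 100; excess gets HTTP 429.
	limiter = rate.NewLimiter(rate.Limit(1000), 100)
	// Singleflight: concurrent lookups for the same key share one DB call.
	group singleflight.Group
	// Worker pool in its simplest form, a buffered channel used as a
	// semaphore: at most 50 requests touch the database at once.
	sem = make(chan struct{}, 50)
)

func handleProfile(w http.ResponseWriter, r *http.Request) {
	if !limiter.Allow() {
		http.Error(w, "too many requests", http.StatusTooManyRequests)
		return
	}
	key := "profile:" + r.URL.Query().Get("id") // hypothetical cache key
	v, err, _ := group.Do(key, func() (interface{}, error) {
		sem <- struct{}{}        // acquire a worker slot
		defer func() { <-sem }() // release it
		return queryDB(key)      // stand-in for the real DB call
	})
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	fmt.Fprint(w, v)
}

func queryDB(key string) (string, error) { return "data for " + key, nil }

func main() {
	http.HandleFunc("/profile", handleProfile)
	http.ListenAndServe(":8080", nil)
}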
Why It Matters

With 100,000 people logging on for work at the same time, peak QPS can hit 30,000. Without these protections, one wave of traffic can take the database down.

02

Distributed Monitoring

Live dashboard showing RED metrics, distributed traces, and error budget across 5 microservices. An anomaly injected at ~8s triggers alert escalation and error-budget burn.

What You See on Screen

Service Health table
Each row = one microservice. Columns: request rate, error %, p50/p95/p99 latency, CPU, memory.
Status dots
Green = healthy, yellow = warning, red with pulse = degraded.
Trace Waterfall
Shows how one API request flows through 4 services, with timing bars.
Error Budget gauge
SLO 99.9% = 43 min/month of allowed downtime. The bar fills as budget is consumed.
Active Alerts
P1 (red) = page someone, P2 (yellow) = Slack notification.

Key Terms

RED Method: Rate / Errors / Duration, the three core metrics for an API service.
SLO (Service Level Objective): e.g. 99.9% availability, which allows roughly 43 minutes of downtime per month.
Error Budget: The downtime allowance the SLO permits; once it is spent, deployments freeze and the team works only on stability (the calculation is sketched below).
Distributed Trace: The complete call chain of one request across multiple services, stitched together by a trace_id.
p95 / p99 Latency: 95% / 99% of requests finish within this time; tail latency reflects real user experience better than the average.
OpenTelemetry: The CNCF standard that unifies the API and data format for metrics, logs, and traces.
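The error-budget arithmetic behind the gauge is worth being able to do on the spot. A tiny Go sketch of the calculation, my illustration using a 30-day month:

package main

import "fmt"

// errorBudget returns the allowed downtime per 30-day month for a given SLO.
func errorBudget(slo float64) (minutes float64) {
	const monthMinutes = 30 * 24 * 60 // 43,200 minutes in a 30-day month
	return (1 - slo) * monthMinutes
}

func main() {
	fmt.Printf("99.9%%  SLO -> %.1f min/month\n", errorBudget(0.999))  // 43.2
	fmt.Printf("99.99%% SLO -> %.1f min/month\n", errorBudget(0.9999)) // 4.3
}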
Why It Matters

When the system breaks, you need to find the root cause within 5 minutes. Without observability, all you can do is guess.

03

Real-time Collaboration

Two editor panes connected to the same WebSocket room. Type in one, see it appear in the other instantly.

What You See on Screen

User A / User B panes
Two independent text areas, both connected to the same Go WebSocket room.
Sync Events counter
Increments every time a message is relayed between editors.
Sync Latency
Milliseconds between sending and receiving a keystroke.
Purple pulse dot
Indicates an active sync connection between the two editors.

Key Terms

CRDT (Conflict-free Replicated Data Type): A data structure that merges concurrent edits automatically, with no central coordinator.
OT (Operational Transformation): The collaboration algorithm early Google Docs used; more complex than CRDTs and gradually being displaced by them.
Yjs: The most mature JavaScript CRDT implementation today; the default choice for Notion-style services.
WebSocket: A full-duplex protocol over a long-lived connection; the server can push proactively, which suits real-time collaboration (a minimal room hub is sketched below).
TipTap: A headless rich-text editor built on ProseMirror, with Yjs collaboration support.
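The relay at the heart of this demo is a small hub. A minimal sketch using gorilla/websocket (the library the stack lists); the single hardcoded room and the error handling are simplified assumptions, not the demo's actual code:

package main

import (
	"log"
	"net/http"

	"github.com/gorilla/websocket"
)

var upgrader = websocket.Upgrader{
	CheckOrigin: func(r *http.Request) bool { return true }, // demo only
}

type hub struct {
	clients   map[*websocket.Conn]bool
	broadcast chan []byte
	join      chan *websocket.Conn
}

func (h *hub) run() {
	for {
		select {
		case c := <-h.join:
			h.clients[c] = true
		case msg := <-h.broadcast:
			// Relay every edit to all connected editors in the room;
			// dead connections are dropped on write failure.
			for c := range h.clients {
				if err := c.WriteMessage(websocket.TextMessage, msg); err != nil {
					c.Close()
					delete(h.clients, c)
				}
			}
		}
	}
}

func main() {
	h := &hub{
		clients:   map[*websocket.Conn]bool{},
		broadcast: make(chan []byte),
		join:      make(chan *websocket.Conn),
	}
	go h.run()
	http.HandleFunc("/ws", func(w http.ResponseWriter, r *http.Request) {
		conn, err := upgrader.Upgrade(w, r, nil)
		if err != nil {
			return
		}
		h.join <- conn
		go func() {
			for {
				_, msg, err := conn.ReadMessage()
				if err != nil {
					return
				}
				h.broadcast <- msg // fan out the keystroke
			}
		}()
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}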
Why It Matters

Real-time collaboration is the core technical challenge of any Notion-style service. This demo shows the sync mechanism in its most basic form.

04

Database Pool Management

Visualizes the PostgreSQL connection pool lifecycle. A traffic spike at ~5s demonstrates pool saturation and query queuing.

What You See on Screen

Connection Pool grid
20 slots; red = active query, green = idle, grey = unused.
Saturation bar
Green < 70%, yellow 70-90%, red > 90%. Red = danger zone.
Connection History chart
Active (red line) vs idle (green line) over time. Dashed red = max limit.
pg_stat_statements table
Top queries by duration. Red highlight = query taking > 500ms.
Waiting count
Queries blocked because all connections are in use.

Key Terms

Connection Pool: A pre-established set of DB connections, so queries avoid the cost of opening a new connection every time.
SetMaxOpenConns: Go's database/sql cap on open connections; it must be kept in line with PostgreSQL's max_connections (see the sketch below).
pg_stat_statements: PostgreSQL's built-in query performance tracker; records execution count and time for every SQL statement.
Saturation: When every connection is busy, new queries can only queue; this is the number-one cause of latency spikes.
PITR (Point-in-Time Recovery): Restoring the database to an arbitrary moment by replaying WAL logs.
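A minimal sketch of the pool settings this demo visualizes, using Go's standard database/sql. The DSN, driver choice, and exact numbers are hypothetical; the 20-slot grid above corresponds to SetMaxOpenConns(20):

package main

import (
	"database/sql"
	"time"

	_ "github.com/lib/pq" // PostgreSQL driver (one common choice)
)

func main() {
	db, err := sql.Open("postgres", "postgres://user:pass@localhost/app?sslmode=disable")
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Hard cap on concurrent connections; keep the sum across all app
	// instances below PostgreSQL's max_connections.
	db.SetMaxOpenConns(20)
	// Idle connections kept warm for reuse.
	db.SetMaxIdleConns(10)
	// Recycle connections so load rebalances after failovers.
	db.SetConnMaxLifetime(30 * time.Minute)
	db.SetConnMaxIdleTime(5 * time.Minute)
}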
Why It Matters

The database is the most fragile bottleneck in the whole system. Get the connection pool wrong and one traffic wave can wedge every API.

05

Kubernetes Operations

Simulates a 3-node K8s cluster lifecycle: HPA autoscaling on a traffic spike, rolling update from v1 to v2, OOMKilled recovery, and scale-down after the load drops.

What You See on Screen

Phase banner
Shows the current phase: Steady State -> HPA Scaling Up -> Rolling Update -> OOMKilled -> Scaling Down.
Node cards with pods
Three nodes showing CPU/memory bars and the pod boxes inside. Pods appear/disappear as HPA scales.
Pod colors
Teal = v1 running, blue = v2 running, yellow = creating, red pulsing = OOMKilled.
HPA gauge
8 replica slots; green = active, yellow pulsing = desired (scaling towards). CPU and QPS targets shown below.
Rolling update progress
Blue progress bar showing the v1 -> v2 transition percentage.
Cluster events
Real K8s-style events: SuccessfulRescale, ScalingReplicaSet, OOMKilled, Scheduled.

Key Terms

HPA (Horizontal Pod Autoscaler): Automatically adjusts the Pod replica count based on CPU or custom metrics (a sample manifest follows this list).
Rolling Update: Replaces old Pods with the new version a few at a time; maxUnavailable + maxSurge control the pace.
PDB (PodDisruptionBudget): Guarantees a minimum number of Pods stays available during rolling updates or node maintenance.
OOMKilled: A container that exceeds its memory limit is killed by the kernel; K8s restarts it automatically.
Bin-packing: The K8s scheduler's strategy for packing Pods onto Nodes; good scheduling = high utilization.
Day-2 Operations: The ongoing work after the cluster is built (Day-0) and workloads are deployed (Day-1): upgrades, tuning, incident response.
Kustomize: K8s-native configuration management; overlays capture the differences between environments (staging/production).
Argo CD: A GitOps deployment tool that watches Git for changes and automatically syncs them to the cluster.
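For reference, a minimal HPA manifest that would produce the behavior shown. This is a sketch under assumptions: the Deployment name is hypothetical, and since the QPS target would require a custom-metrics adapter, only the CPU target appears:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: api-server # hypothetical Deployment name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: api-server
  minReplicas: 2
  maxReplicas: 8 # matches the 8 replica slots on the gauge
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70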
Why It Matters

A small platform team has to automate K8s operations as far as possible: HPA handles scaling, GitOps handles deployment, alerting handles notification, and humans deal only with the exceptions a machine cannot judge.

06

High-Concurrency IM Architecture

Pure-frontend simulation of a 6-layer WebSocket + Kafka + Redis + Postgres IM system at 10K–1M user scale. A risk cycle rotates through Redis failure, WS disconnect, Kafka lag, and DB pressure every 60s, so every live metric breathes.

What You See on Screen

Scale slider (10K–1M)
Drives partitions, consumers, WS instances, and Redis memory together; every section updates in lockstep.
Six-Layer Architecture
Client -> Gateway -> WS -> Kafka -> Consumer -> Storage. Each row shows its capacity value + unit at the current scale.
Kafka Partition Distribution
Bar chart of per-partition load over the last 20 ticks. Each bar has a Cd{i} owner badge pointing to its delivery consumer.
Delivery + Persistence groups
Two Kafka consumer groups share the partitions: the delivery group pushes via Redis Pub/Sub, the persistence group batches writes to Postgres. Independent HPA behavior (a sketch of this fan-out follows this list).
Message Lifecycle (7 stops)
Each stop shows the current in-flight message count at that stage; brightness scales with the count. Σ = total in-flight != worker pod count (different dimensions).
Fast / Slow path
Fast = Redis Pub/Sub + WS delivery (~sub-200ms). Slow = batched Postgres writes for history.
Hot / Cold storage
Redis holds the last 7 days (hot); Postgres + S3/Data Lake absorbs the older archive (cold). The archive pipeline animates every 80 ticks.
Observability (4 signals)
Four sparklines driven by live state history: msg/s, pod count, Kafka lag, end-to-end p50. 24-tick rolling window.
Risks & Countermeasures
Four failure scenarios cycled every 60s. The active card glows and its matching countermeasure banner appears.
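To ground the fast path and the dual-group fan-out, here is a minimal Go sketch of a delivery consumer. The client libraries (segmentio/kafka-go, go-redis) and the topic and channel names are my assumptions, not confirmed by the demo:

package main

import (
	"context"
	"log"

	"github.com/redis/go-redis/v9"
	"github.com/segmentio/kafka-go"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClient(&redis.Options{Addr: "localhost:6379"})

	// All delivery pods share one group ID, so Kafka assigns each
	// partition to exactly one of them; the persistence group would
	// read the same topic under its own group ID.
	r := kafka.NewReader(kafka.ReaderConfig{
		Brokers: []string{"localhost:9092"},
		GroupID: "delivery",
		Topic:   "messages", // assumed topic name
	})
	defer r.Close()

	for {
		// Messages are keyed by conversation ID, so one conversation
		// always lands on the same partition and stays ordered.
		m, err := r.ReadMessage(ctx)
		if err != nil {
			log.Fatal(err)
		}
		// Fast path: fan out through Redis Pub/Sub; whichever WS pod
		// holds the recipient's connection pushes it over WebSocket.
		channel := "conv:" + string(m.Key)
		if err := rdb.Publish(ctx, channel, m.Value).Err(); err != nil {
			log.Printf("publish failed: %v", err)
		}
	}
}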

Key Terms

WebSocket: A persistent, full-duplex connection over TCP; either side can send at any time, with low latency. The standard connection protocol for IM systems.
Kafka Partition: A topic split into multiple ordered logs; the partition key decides which log a message lands in. The same key always maps to the same partition, guaranteeing order.
Consumer Group: Each group keeps its own offsets, and within a group each partition is read by exactly one consumer. IM systems commonly fan out with two groups: delivery + persistence.
Redis Pub/Sub: The channel for broadcasting messages across WS pods. Pod A receives a message -> publishes -> clients on other pods receive it.
UUIDv7: A UUID variant with a time prefix and monotonic increment; works as a primary key yet sorts by time, replacing Snowflake-style IDs.
Fast Path: The shortest route to the recipient: Kafka -> Redis Pub/Sub -> WS, target < 200ms.
Slow Path: The route into the history store: consumer batches -> Postgres, with batching absorbing spikes.
Hot Data: Messages from the last 7 days live in Redis; reads < 1ms.
Cold Data: Messages older than 7 days move to Postgres / S3 / Data Lake; slower reads, much cheaper storage.
Consistent Hashing: The algorithm the Gateway uses to route each client to a fixed WS pod; adding or removing pods leaves most routes unchanged (see the sketch after this list).
Backpressure: When a downstream stage clogs, it tells upstream to slow down or reject, so queues do not pile up layer by layer and blow out memory.
HPA (Horizontal Pod Autoscaler): In this demo, the kafka-lag scenario scales the delivery group from 24 to 28 pods.
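Consistent hashing is easiest to see in code. A minimal hash-ring sketch with virtual nodes; this is my illustration, not the demo's implementation:

package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring maps keys to pods; adding or removing a pod only remaps the keys
// that fall in that pod's arc, leaving most routes unchanged.
type Ring struct {
	hashes []uint32          // sorted virtual-node positions
	owner  map[uint32]string // position -> pod name
}

func hashOf(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing places vnodes virtual nodes per pod to smooth the distribution.
func NewRing(pods []string, vnodes int) *Ring {
	r := &Ring{owner: map[uint32]string{}}
	for _, p := range pods {
		for i := 0; i < vnodes; i++ {
			h := hashOf(fmt.Sprintf("%s#%d", p, i))
			r.hashes = append(r.hashes, h)
			r.owner[h] = p
		}
	}
	sort.Slice(r.hashes, func(i, j int) bool { return r.hashes[i] < r.hashes[j] })
	return r
}

// Route returns the pod owning the first virtual node at or after the
// key's position on the ring, wrapping around at the end.
func (r *Ring) Route(clientID string) string {
	h := hashOf(clientID)
	i := sort.Search(len(r.hashes), func(i int) bool { return r.hashes[i] >= h })
	if i == len(r.hashes) {
		i = 0
	}
	return r.owner[r.hashes[i]]
}

func main() {
	ring := NewRing([]string{"ws-pod-1", "ws-pod-2", "ws-pod-3"}, 100)
	fmt.Println(ring.Route("user-42")) // same user always hits the same pod
}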
Why It Matters

For enterprise IM, 200-400K concurrent connections, sub-200ms latency, and zero message loss are hard requirements. One responsibility per layer + Kafka for ordering + the dual-group fan-out is the same general pattern used by Slack, Teams, and LINE.

Quick Reference

Architecture Stack

Backend: Go 1.22+ / gorilla/websocket / net/http
Frontend: Next.js / React / shadcn/ui / Tailwind CSS
Real-time: WebSocket (Go server push at 10Hz)
Database: PostgreSQL + Redis (production plan)
Orchestration: Kubernetes + Argo CD GitOps
Monitoring: Prometheus + Grafana + Loki + Tempo (LGTM)

Scale Numbers

Total Users: 100,000 employees
DAU: ~60,000 (60%)
Peak QPS: ~30,000
WebSocket Conns: ~10,000 concurrent
Data / Year: ~10 TB (with history)
SLO Target: 99.9% (43 min/month budget)

Key Decisions

Modular Monolith first: Start with a modular monolith rather than rushing into microservices; going micro too early drowns a small team in operational cost.
CRDT over OT: Yjs's CRDT is simpler than OT and supports offline editing; it is the modern choice for collaborative editing.
Kanban over Scrum: When a team cannot absorb sprint overruns with overtime, Kanban + WIP limits is the more flexible fit.
Trunk-Based Dev: Short-lived feature branches + feature flags keep branch management from crushing a small team.
Self-hosted stack: For cost predictability and data sovereignty, Grafana LGTM + MinIO + Vault cover the observability / object storage / secrets layers without locking into a specific vendor.