Job Summary:
We are seeking highly motivated and skilled Pod Software Engineers to join our System Software team. This team plays a critical role in developing, qualifying, and optimizing high-performance networking solutions for large-scale inference workloads. As a Pod Software Engineer, you will focus on developing and qualifying software that drives communication amongst Sohu inference nodes in multi-rack inference clusters. You will collaborate closely with kernel, platform, and telemetry teams to push the boundaries of peer-to-peer RDMA efficiency.
Key Responsibilities:
High-Performance Peer-to-Peer Networking: Design, develop, and implement RDMA-based network peering that supports high-bandwidth, low-latency communication across PCIe nodes within and across racks. This includes work across the operating system, kernel drivers, embedded software, and system software.
Test Development: Develop tests that qualify host processors (x86), NICs, ToR switches, and device network interfaces for high performance.
Burn-in Integration: Furnish burn-in teams with tests that represent real-world use cases and workloads for device-to-device networking, along with extreme-load stress tests.
Performance/Health Telemetry Design: Define the key metrics that system software must collect to maintain high availability and performance under extreme communications workloads.
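As a concrete illustration of the telemetry item above, the sketch below reads the standard InfiniBand port counters that Linux exposes under sysfs. This is a minimal sketch only; the device name "mlx5_0", the port number, and the counter selection are illustrative assumptions, not details of our stack.

    /* Minimal telemetry sketch: read standard InfiniBand port counters
     * from sysfs. Device "mlx5_0" and port 1 are illustrative assumptions. */
    #include <stdio.h>

    static long long read_counter(const char *dev, int port, const char *name)
    {
        char path[256];
        long long value = -1;
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/class/infiniband/%s/ports/%d/counters/%s",
                 dev, port, name);
        f = fopen(path, "r");
        if (f) {
            if (fscanf(f, "%lld", &value) != 1)
                value = -1;
            fclose(f);
        }
        return value;
    }

    int main(void)
    {
        /* port_xmit_data / port_rcv_data count in units of 4 octets. */
        printf("tx bytes:   %lld\n", 4 * read_counter("mlx5_0", 1, "port_xmit_data"));
        printf("rx bytes:   %lld\n", 4 * read_counter("mlx5_0", 1, "port_rcv_data"));
        printf("link downs: %lld\n", read_counter("mlx5_0", 1, "link_downed"));
        return 0;
    }

Polling counters like these at a steady interval, alongside error counters such as symbol_error or local_link_integrity_errors, is one common way to surface link health next to application-level metrics.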
Representative Projects:
Analyze performance deviations, optimize network stack configurations, and propose kernel tuning parameters for low-latency, high-bandwidth inference workloads.
Design and execute automated qualification tests for RDMA NICs and interconnects across various server configurations.
Identify and root-cause firmware, driver, and hardware issues that impact RDMA performance and reliability.
Collaborate with ODMs and silicon vendors to validate new RDMA features and enhancements.
Implement and validate peer-to-peer RDMA support for GPU-to-GPU and accelerator-to-accelerator communication.
Modify kernel drivers and user-space libraries to optimize direct memory access between inference pods.
Profile and benchmark inter-node RDMA latency and bandwidth to improve inference job scaling (a timing sketch follows this list).
Optimize NIC and switch configurations to balance throughput, congestion control, and reliability.
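As one example of the profiling item above, here is a minimal sketch of timing a single signaled RDMA write with libibverbs. It assumes a connected RC queue pair, its send completion queue, a registered memory region, and a remote address/rkey already exchanged out of band; all of that setup is omitted, and the function is hypothetical rather than part of our codebase.

    /* Hypothetical latency probe: assumes 'qp' is an already-connected RC
     * queue pair, 'cq' its send completion queue, 'mr' a registered local
     * buffer, and 'remote_addr'/'rkey' exchanged out of band. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <time.h>

    /* Returns one-shot RDMA-write latency in microseconds, or -1 on error. */
    double time_rdma_write(struct ibv_qp *qp, struct ibv_cq *cq,
                           struct ibv_mr *mr, uint64_t remote_addr,
                           uint32_t rkey, uint32_t len)
    {
        struct ibv_sge sge = {
            .addr   = (uintptr_t)mr->addr,
            .length = len,
            .lkey   = mr->lkey,
        };
        struct ibv_send_wr wr = {
            .opcode     = IBV_WR_RDMA_WRITE,
            .sg_list    = &sge,
            .num_sge    = 1,
            .send_flags = IBV_SEND_SIGNALED,   /* ask the NIC for a completion */
            .wr.rdma    = { .remote_addr = remote_addr, .rkey = rkey },
        };
        struct ibv_send_wr *bad = NULL;
        struct ibv_wc wc;
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        if (ibv_post_send(qp, &wr, &bad))
            return -1.0;
        while (ibv_poll_cq(cq, 1, &wc) == 0)   /* busy-poll for the completion */
            ;
        clock_gettime(CLOCK_MONOTONIC, &t1);

        if (wc.status != IBV_WC_SUCCESS)
            return -1.0;
        return (t1.tv_sec - t0.tv_sec) * 1e6 +
               (t1.tv_nsec - t0.tv_nsec) / 1e3;
    }

In practice you would repeat this over many iterations and report percentiles rather than a single sample, which is essentially what the perftest tools (ib_write_lat, ib_write_bw) do.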
Must-Have Skills and Experience:
Proficiency in C/C++
Proficiency in at least one scripting language (e.g., Python, Bash, Go).
Strong experience with device-to-device networking technologies (RDMA, GPUDirect, etc.), including RoCE.
Experience with zero-copy networking, RDMA verbs, and memory registration.
Familiarity with queue pairs, completion queues, and transport types (see the sketch after this list).
Strong understanding of operating systems (Linux preferred) and server hardware architectures.
Ability to analyze complex technical problems and provide effective solutions.
Excellent communication and collaboration skills.
Ability to work independently and as part of a team.
Experience with version control systems (e.g., Git).
Experience with reading and interpreting hardware logs.
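To make the verbs and memory-registration items above concrete, the sketch below walks the canonical libibverbs setup path: open a device, allocate a protection domain, register a buffer for zero-copy transfers, create a completion queue, and create a reliable-connected queue pair. It is a hedged sketch; device index 0 and the buffer and queue sizes are arbitrary choices, and most error handling is elided.

    /* Minimal sketch of the libibverbs setup path, assuming an RDMA-capable
     * NIC at device index 0. Sizes are arbitrary; most error handling elided.
     * Build (illustrative): gcc verbs_sketch.c -libverbs */
    #include <infiniband/verbs.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no RDMA devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        struct ibv_pd *pd = ibv_alloc_pd(ctx);

        /* Memory registration: pin a buffer so the NIC can DMA directly
         * into/out of it (the basis of zero-copy transfers). */
        size_t len = 1 << 20;
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);

        /* Completion queue: the NIC reports finished work requests here. */
        struct ibv_cq *cq = ibv_create_cq(ctx, 256, NULL, NULL, 0);

        /* Reliable-connected (RC) queue pair: a send queue plus a receive
         * queue, sharing one CQ in this sketch. */
        struct ibv_qp_init_attr attr = {
            .send_cq = cq,
            .recv_cq = cq,
            .qp_type = IBV_QPT_RC,
            .cap = { .max_send_wr = 128, .max_recv_wr = 128,
                     .max_send_sge = 1,  .max_recv_sge = 1 },
        };
        struct ibv_qp *qp = ibv_create_qp(pd, &attr);

        printf("qp_num=%u lkey=%u rkey=%u\n", qp->qp_num, mr->lkey, mr->rkey);

        ibv_destroy_qp(qp);
        ibv_destroy_cq(cq);
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

A real peer would then exchange QP numbers, LIDs/GIDs, and rkeys out of band and transition the QP through INIT, RTR, and RTS before posting work requests.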
Nice-to-Have Skills and Experience:
Experience with networking technologies like NVLink, InfiniBand, and ML pod interconnects.
Experience with widely deployed top-of-rack switches (Cisco, Juniper, Arista, etc.).
Knowledge of server virtualization.
Experience with tracing tools like perf, eBPF, ftrace, etc.
Experience with performance testing and benchmarking tools (gprof, VTune, Wireshark, etc.).
Familiarity with hardware diagnostic tools and techniques.
Experience with containerization technologies (e.g., Docker, Kubernetes).
Experience with CI/CD pipelines.
Experience with Rust.
Ideal Background:
Candidates who have worked on GPU or TPU pods, specifically in the networking domain.
Candidates who understand the uptime challenges of very large ML deployments.
Candidates who have actively debugged complex network topologies, specifically dealing with node dropouts and failures, route-arounds, and overall pod resiliency.
Candidates who understand the performance implications of pod networking software.
Benefits
Full medical, dental, and vision packages, with 100% of premium covered
Housing subsidy of $2,000/month for those living within walking distance of the office
Daily lunch and dinner in our office
Relocation support for those moving to West San Jose
How we’re different
Etched believes in the Bitter Lesson. We think most of the progress in the AI field has come from using more FLOPs to train and run models, and the best way to get more FLOPs is to build model-specific hardware. Larger and larger training runs encourage companies to consolidate around fewer model architectures, which creates a market for single-model ASICs.
We are a fully in-person team in West San Jose, and greatly value engineering skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
By burning the transformer architecture into our chips, we're creating the world's most powerful servers for transformer inference.