
Yang Liu: Building a Focused Team, a Common Culture, and a Faster Path to Better Storage

  • JB Baker 
  • 7 min read

Yang Liu, VP of ASIC Engineering and Co-Founder of ScaleFlux, explains how the company’s success comes from assembling an experienced, execution-driven team and building a culture focused on simplicity, quality, and deep customer collaboration. He emphasizes that smarter, AI-ready storage controllers—not just denser media—are essential to closing the gap between compute and storage.

  1. What pushed you to co-found ScaleFlux, and how did that background shape the product direction and team you built?
  2. How did you turn that diversity into one culture?
  3. What are the biggest lessons learned—from earlier roles and from ScaleFlux’s first generations?
  4. What makes a good partner, and how do you build symbiotic customer relationships?
  5. Looking ahead, what’s misunderstood about storage, and what innovations are needed for the AI infrastructure explosion?

Q1: What pushed you to co-found ScaleFlux, and how did that background shape the product direction and team you built?

I saw a persistent gap between compute and storage—especially around data efficiency. My SoC background made it obvious: if we could move key data operations closer to where data lives, we could relieve pressure on CPUs and memory. We started with FPGAs to explore ideas quickly, then committed to accelerator ASICs to deliver at scale.

On the team, we deliberately hired senior people who had shipped multiple chips—architecture, design, verification, physical design, packaging, board—each discipline led by someone who had already succeeded at top companies. That experience base mattered. As a startup, we didn’t have time for long learning curves. We knew who to call, what good looked like, and how to stand up a full silicon program with confidence. Over ten years, that mix of expertise—plus the discipline of working through several product generations—hardened our team, our process, and our roadmap execution.

Q2: You pulled in leaders from many companies with different habits and norms. How did you turn that diversity into one culture?

Different resumes don’t have to mean conflicting cultures. We were very clear about our center of gravity: ship the best product, with the best quality and efficiency. That clarity sets behavior. Startups feel different from big companies—fewer safety nets, faster loops, more direct accountability. We made that explicit. Everyone knows our target, everyone sees how their work moves us toward that target, and everyone understands we win only by delivering. That clarity reduces friction because debates anchor to outcomes, not to “the way we did it at Company X.”

We also built trust by being transparent about priorities and constraints. When schedules are tight and resources are lean, you communicate early and often. That helped transform diverse experiences into a single operating rhythm—fast feedback, thoughtful tradeoffs, and a bias to finish.

Q3: What are the biggest lessons learned from earlier roles and from ScaleFlux’s first generations?

Two stand out.

First, persistence. Big companies can abandon promising projects when priorities shift. As a startup, if you believe the technology is right, you stay on it, you polish, you improve, and you partner deeply with customers until it lands.

Second, “ease of use” isn’t a nice-to-have; it’s a gating requirement. Our earliest products gained performance via a host driver. Technically sound, but operationally problematic: OS/kernel dependencies, security updates, and varied customer environments all created complexity for customers and for our support team. We chose to absorb that complexity into the controller. The result: transparent compression, truly plug-and-play. We didn’t sacrifice customer performance; we took on more engineering complexity to eliminate customer friction. That decision widened adoption and taught us to prioritize operational simplicity alongside raw specs.

Another lesson is about flow ownership. Early on, we collaborated with a large partner for back-end implementation. That was the right call to move fast, but it increased cost and reduced our schedule control. Over subsequent generations, we recruited deeply experienced back-end talent and brought the full SoC flow in-house—from IP to GDS. Today we control the critical path, we contain costs, and we iterate faster. That capability growth—plus a carefully curated in-house IP library we continually refine—lets us deliver more with fewer dollars and fewer people.

Q4: What makes a good partner, and how do you build symbiotic customer relationships?

Trust is earned, not promised. First, we must be prepared—solid team, working silicon, and evidence we can deliver. Then we operate with transparency. No over-promising. We share our constraints and our roadmap tradeoffs, and we invite the same from customers. That’s how misunderstandings are avoided and how products get defined correctly the first time—especially critical in ASICs, where a respin costs months and millions of dollars.

We aim for relationships where both sides learn and win. For target customers, we go deep technically, exchange perspectives on workload behavior, and co-design features that matter. When a hyperscaler promotes a capability they get exclusively from us—like transparent compression—that’s the sign the loop is working. It validates our thesis and, equally important, it validates their investment in working closely with a startup.

Over time, we’ve seen that initial “you’re a startup” hesitation replaced by confidence, because success compounds: each on-time, high-quality generation increases trust and lowers perceived risk.

Q5: Looking ahead, what’s misunderstood about storage, and what innovations are needed for the AI infrastructure explosion?

Two misconceptions persist. First, that storage can be treated like a passive commodity while compute and memory get all the attention. In AI, that’s not true. When models and datasets scale, storage controllers become performance gatekeepers. If the controller can’t drive ultra-high IOPS for small I/O patterns or can’t manage data efficiently, GPUs starve and TCO balloons. Second, that “media alone” will fix things. New or denser media helps, but without smarter controllers and software integration, you leave performance and efficiency on the table.
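The gatekeeper effect is easy to see with a rough back-of-envelope calculation. At a fixed small I/O size, the data rate a drive can feed GPUs is capped by its random-read IOPS, no matter how good its headline sequential throughput is. The sketch below uses purely illustrative numbers (the 10 GB/s demand figure, 4 KiB I/O size, and IOPS ratings are hypothetical, not ScaleFlux specs):

```python
# Illustrative back-of-envelope: how small-I/O IOPS caps the data rate
# a storage device can feed to GPUs. All figures are hypothetical.

def effective_bandwidth_gbps(iops: float, io_size_bytes: int) -> float:
    """Deliverable bandwidth (GB/s) at a given random-I/O rate and size."""
    return iops * io_size_bytes / 1e9

# Hypothetical training scenario: GPUs collectively consume 10 GB/s of
# 4 KiB random reads (e.g., shuffled sample fetches from a dataset).
required_gbps = 10.0
io_size = 4096  # 4 KiB

for iops in (500_000, 1_000_000, 3_000_000):
    bw = effective_bandwidth_gbps(iops, io_size)
    # Fraction of the GPUs' data demand this drive can actually satisfy.
    supply_ratio = min(1.0, bw / required_gbps)
    print(f"{iops:>9,} random-read IOPS -> {bw:5.2f} GB/s "
          f"-> ~{supply_ratio:.0%} of GPU data demand met")
```

The point of the sketch: a drive sustaining 500K random-read IOPS delivers only about 2 GB/s at 4 KiB, a fifth of the hypothetical demand, regardless of its sequential rating. That is why controller-level small-I/O performance, not media density, is the gating variable.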

What’s needed? Three things:

  1. Ultra-IOPS controllers purpose-built for AI: We’re pushing controllers to sustain extreme rates with predictable latency for small I/Os, not just headline sequential numbers. That demands architectural choices at every level, from queue management to compression pipelines to how we schedule internal bandwidth.
  2. Data efficiency that’s invisible to operators: Compression, placement, and background work must be transparent. No special drivers, no fragile dependencies. The controller should handle and hide all the complexity so customers can adopt at scale.
  3. Media-agnostic readiness: NAND will keep evolving, and new media will emerge. Controllers should be open and adaptable—ready to integrate whatever meets AI’s needs. Our experience and relationships across media vendors help us enable that early.

Underneath all of this is a practical ordering of priorities by market. At the edge, many customers optimize cost/power/performance. In AI centers, it flips: performance first, then power, then cost. Storage has to align to that reality, because in an AI rack the storage share of BOM is small, but its influence on GPU utilization is huge.

Finally, the way we operate matters as much as what we build. We’ve shown we can deliver production-worthy A-0 silicon, which is critical to the time-to-market our customers need to stay competitive. We stay lean, focus only on storage and memory for data centers and AI, and reuse and harden our IP library instead of reinventing every block. That disciplined focus is why we can deliver with a fraction of the resources—and why customers see us as an agile, efficient partner for the next wave of AI infrastructure.

Any closing thoughts?

We don’t chase breadth; we pursue depth. By concentrating talent, owning the flow, removing customer friction, and co-designing with key users, we’ve built a culture that efficiently delivers generation after generation. That’s how we close the gap between compute and storage, and that’s how we’ll help power what comes next.


In Summary: Build a coherent team to deliver easy-to-use innovations

Align the entire chip development team around experienced leaders and customer needs.

  • Don’t sacrifice usability for cool features
  • Engage deeply with users to build for their needs
  • Focus on specific markets instead of trying to serve everyone in every use case

JB Baker

JB Baker is a technology business leader with a 25+ year track record of driving top- and bottom-line growth through new products for enterprise and data center storage. He joined ScaleFlux in 2018 to lead Product Planning & Marketing as the company expands the capabilities of Computational Storage and its adoption in the marketplace.