
Troubleshooting

Troubleshooting: Different Valid Tracing IDs Are Mingled Together

Last week, users reported a weird issue; they found that:

  • The server received 10+ identical requests to the same API within a single trace, which should be impossible.
  • When the server made another RPC call while serving a request, the egress logging interceptor reported two different tracing IDs: the first was set on the context logger after parsing the trace ID from the client, and the second was retrieved from the context. They should have been the same.
  • Data put into the context (which the underlying SDK propagates on RPC calls) could not be propagated to the downstream server.

This issue was complicated by the number of components and cross-service calls involved, and it took me a long time to find the root cause. It happened because the user passed the gin.Context directly as a context.Context into a spawned goroutine, while gin reuses the gin.Context for another request. As a result, the goroutine spawned by the previous request is affected by the new request, and the trace IDs get mingled, as the sketch below illustrates.
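
To make the root cause concrete, below is a minimal sketch of the failure mode. The route names, the middleware, and the "trace_id" key are my own illustrations, not the user's actual code: the buggy handler hands the pooled *gin.Context to a goroutine directly, while the fixed handler takes a c.Copy() snapshot first.

```go
package main

import (
	"log"
	"net/http"
	"time"

	"github.com/gin-gonic/gin"
)

func main() {
	r := gin.New()

	// Hypothetical middleware: stores the incoming trace ID on the gin.Context.
	r.Use(func(c *gin.Context) {
		c.Set("trace_id", c.GetHeader("X-Trace-Id"))
		c.Next()
	})

	r.GET("/buggy", func(c *gin.Context) {
		go func() {
			// BUG: gin pools gin.Context values. Once this handler returns,
			// c may be recycled for a different request, so the trace ID read
			// here can belong to someone else's request.
			time.Sleep(100 * time.Millisecond)
			log.Println("async trace:", c.GetString("trace_id"))
		}()
		c.Status(http.StatusOK)
	})

	r.GET("/fixed", func(c *gin.Context) {
		cp := c.Copy() // read-only snapshot, safe to use after the handler returns
		go func() {
			time.Sleep(100 * time.Millisecond)
			log.Println("async trace:", cp.GetString("trace_id"))
		}()
		c.Status(http.StatusOK)
	})

	_ = r.Run(":8080")
}
```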

gotypesalias=1 does not seem to get used correctly for deps on Go 1.23

Recently, I helped troubleshoot the Go type alias issue inside packages.Load, which involves x/tools, go list, and cmd/compile. My original CL was correct, but I was confused when I found that the export data contains all aliases regardless of the gotypesalias setting. Then tim shepherded my CL with another CL to fix it.
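
As a toy illustration of what the setting controls (my own example, not the CL or the export-data code path), GODEBUG=gotypesalias decides whether go/types materializes a declaration like `type A = B` as a *types.Alias node or resolves it straight to the aliased type:

```go
package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
)

const src = `package p
type B int
type A = B // an alias declaration
`

func main() {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "p.go", src, 0)
	if err != nil {
		panic(err)
	}
	conf := types.Config{Importer: importer.Default()}
	pkg, err := conf.Check("p", fset, []*ast.File{f}, nil)
	if err != nil {
		panic(err)
	}
	// With GODEBUG=gotypesalias=1 the object's type is a *types.Alias;
	// with gotypesalias=0 it is resolved directly to the named type B.
	obj := pkg.Scope().Lookup("A")
	_, isAlias := obj.Type().(*types.Alias)
	fmt.Printf("A materialized as *types.Alias: %v\n", isAlias)
}
```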

I felt a bit frustrated when I saw the email after getting up, but it helped me learn a lot about how the Go toolchain and cmd/compile work. Think twice before you act!

golang/tools: reduce jitter of packages.Load

In CL 614975, I migrated the ssa package off the deprecated loader. However, miller reported that the tests became extremely slow: on an operating system with a slow file system, such as plan9, the tests slowed down by more than 50x.

Oops, an unintentional change :( I submitted a CL to load the 300+ packages with a single packages.Load call instead of loading them one at a time (a sketch of the idea follows). Alan thought it was a bravo change :)
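
The core of the fix, sketched below with placeholder package patterns (the real 300+ packages come from the test suite, not this list), is simply to pass all patterns to one packages.Load call: each call pays the fixed cost of driving `go list`, so batching amortizes it.

```go
package main

import (
	"fmt"
	"log"

	"golang.org/x/tools/go/packages"
)

func main() {
	cfg := &packages.Config{
		Mode: packages.NeedName | packages.NeedTypes | packages.NeedSyntax,
	}

	// Slow: one packages.Load (and one `go list` invocation) per package:
	//   for _, p := range patterns { packages.Load(cfg, p) }

	// Fast: a single Load resolves all patterns together.
	patterns := []string{"fmt", "net/http", "golang.org/x/tools/go/ssa"} // placeholder set
	pkgs, err := packages.Load(cfg, patterns...)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("loaded", len(pkgs), "packages in one call")
}
```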

Moreover, I learned a lot while investigating why packages.Load was so slow. This blog post records how I investigated it.

Go: log.Fatalf is Bad for Framework Maintainers

For an open source framework, as long as the maintainers can confirm that an issue is caused by users rather than by the framework, they can usually offer some guidance and avoid delving into the users' code.

However, for a company-internal framework maintainer, things differ greatly. Since you claim your framework helps users write better code on top of the company's platforms, they are likely to rely on the maintainers to figure out weird issues whenever the framework is involved.

This blog post records how I helped our framework users troubleshoot a weird issue of a service repeatedly restarting and failing to deploy, caused by log.Fatalf, and my opinion on why log.Fatalf is bad; a small sketch follows.
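
To make the opinion concrete, here is a minimal sketch (my own example, not the framework's code) contrasting a library function that calls log.Fatalf with one that returns an error: log.Fatalf ends with os.Exit(1), so deferred cleanup never runs and the caller gets no chance to retry, fall back, or log a clearer message before the process dies and the deployment keeps crash-looping.

```go
package main

import (
	"errors"
	"fmt"
	"log"
)

// badConnect aborts the whole process on failure; a transient error during
// startup turns into an immediate exit and a restart loop in the deployment.
func badConnect(addr string) {
	if addr == "" {
		log.Fatalf("connect: empty address") // os.Exit(1): defers never run, caller cannot recover
	}
}

// goodConnect returns the error instead, letting the caller decide what to do.
func goodConnect(addr string) error {
	if addr == "" {
		return errors.New("connect: empty address")
	}
	return nil
}

func main() {
	if err := goodConnect(""); err != nil {
		fmt.Println("caller decides:", err) // e.g. retry with backoff, or exit with context
	}
}
```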

Migration from Docker to OrbStack

Recently, my company abandoned Docker Desktop because it requires the company to pay for its services. The company treats this as an unnecessary cost and asked developers to find an alternative. This blog post records the troubles I ran into during my migration.