Cognitive Biases in Performance, Wut?
For most of our practice in performance engineering, we focus on objective measures for the system under test: response times, CPU, disk, memory, network, queue depth. We collect the data. We crunch the numbers. We project future capacity. We sleep well. But what if those objective measures were deceptively limiting our understanding of performance? What if those elaborate algorithms had convinced us that we were valuable or right? What if there’s more to the story? In this session, we’ll take some time to consider the meta-performance and meta-complexity in our disciplines and our measurement tooling. We’ll talk about why performance actually matters in the bigger picture. We’ll take a step back, consider everything that impacts your ability to be valued, and expand performance beyond the metrics.
Mark Tomlinson is currently a Performance Architect-at-large and the producer of the popular performance podcast PerfBytes. His testing career began in 1992 with a comprehensive two-year test of a life-critical transportation system, a project which sparked his interest in software testing, quality assurance, and test automation. After extended work at Microsoft, Hewlett-Packard, and PayPal, he has amassed broad experience in real-world scenario testing of large and complex systems and is regarded as a thought leader in software test automation, with a specific emphasis on performance. Mark also helps customers adopt modern performance testing and engineering practices.