• This week Thierry de Pauw shared a dialogue from a testing conference (most likely TestCon Europe) about how conferences are siloed and how difficult it is to get developer-focused topics accepted at testing conferences. That post clearly touched a relevant problem, as interesting thoughts keep coming in the comments. In my opinion, when talking about boundaries for testing and quality, it is hard to draw a line - testing professionals need to build an understanding of many topics, including development processes and practices, to enable and advocate for better experimentation, better testing practices, shorter feedback cycles, and so on.

  • However, listening to the TestCon talks last week did feel a bit siloed - not all, but many of the materials kept repeating the same ideas from five or more years ago. Yes, the tools got better and the rise of AI is finally here, but when it comes to critical thinking and experimentation there has not been much progress, and many speakers once again pointed to a “lack of user-centricity”, which is true. Interestingly, on the subject of user-centricity and experimentation/testing, I would say the Developer Experience community is doing a comparably better job than the Testing community. That is why I have recently found myself reading more from the DX community than from testing, and my favorite resources are:
  • On a similar note of a “lack of user-centricity”, or a lack of focus on value/impact to the business, this year I have been (and still am) actively exploring the topic of quality metrics, as I had a gut feeling we were missing part of the full picture in our company. Early this year Michael Kutz posted an article about Outer, Inner, and Process Quality in Software Development, which gave a nice model for classifying different metrics and spotting where we are lacking (in our case, the Outer ones - the “lack of user/business centricity”). However, I found that model difficult to apply: the meaning behind each class was hard to communicate, and there are some overlaps. Later on, Pragmatic Engineer (Gergely Orosz) together with Kent Beck published “Measuring developer productivity? A response to McKinsey”, which introduced (at least for me) the “effort/output/outcome/impact” mental model. I found this model very useful not only for clarifying our targets in projects, but in other contexts as well - for example, for mapping out and communicating how my efforts in a more abstract role (Testing Practice Lead) translate into impact on delivery quality across multiple projects, sales, community, and innovation. Based on that material and my experience, I composed a separate page - Effort, impact and experimentation in testing.

  • As I have worked on several highly regulated projects recently, the following talk by Charity Majors resonated with some of my experiences there. In it she explains how “Compliance and Regulatory Standards Are NOT Incompatible With Modern Development Practices”. A nice bonus on the side: the full playlist of fintech_devcon 2023 talks is accessible online.

  • And finally, something to try out. Remember the Wordle game? Some time ago I accidentally found something similar yet more advanced - “Semantle”. There you also need to guess a word, but this time not by matching letters: instead, each guess gets a similarity score to the secret word, based on the Word2vec neural network model. Be aware - it is catchy and not as simple as it may sound.
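
    For the curious, here is a minimal sketch of the word-similarity scoring such a game relies on, assuming the gensim library and its pretrained word2vec-google-news-300 vectors; Semantle's actual model, corpus, and score scaling may differ, and the target word here is purely hypothetical.

    ```python
    # Sketch: scoring guesses by word-vector similarity, the core idea behind Semantle-style games.
    # Assumes gensim and the pretrained "word2vec-google-news-300" model (a large one-time download);
    # the real game's model and scaling may differ.
    import gensim.downloader as api

    vectors = api.load("word2vec-google-news-300")  # KeyedVectors: one vector per word

    target = "coffee"  # hypothetical secret word
    for guess in ["tea", "cup", "bicycle", "espresso"]:
        # Cosine similarity between the two word vectors, roughly in [-1, 1];
        # such games typically show it scaled, e.g. multiplied by 100.
        score = vectors.similarity(target, guess)
        print(f"{guess}: {score * 100:.2f}")

    # The words closest to the target give a sense of how "warm" a guess can get.
    print(vectors.most_similar(target, topn=5))
    ```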