My thoughts on the R-HTA workshops

Last Monday and Friday, we held our online R-HTA workshop. We will discuss the feedback and agree on ways forward in the consortium Scientific Committee soon, but I thought I’d write down some personal reactions to the very interesting discussions — of course, what comes out of all this eventually may well be very different to my synthesis! Still…

  • I thought the two days went very well — I enjoyed them very much and I think the participants did too. Of course, there was an element of frustration (if not from the audience, I certainly felt it!) because we had to “mute” all the participants, mainly to avoid privacy issues with the recording (which, by the way, will be made available shortly — more on this later…) and also for the sake of managing the large number of attendees. There was an extremely active chat, so we got plenty of comments and interaction, but it was kind of annoying not to be able to hear the voices of the people who wanted to contribute.
  • On a related note, the second day was a bit better (maybe I’m getting used to it. Or maybe it was a Friday-evening bias…), but Zoom meetings are really draining… On Friday evening I fell asleep in front of the telly before 9pm…
  • The numbers were quite healthy. We had around 230 registered people on each day — in reality the actual audience was smaller than that, with peaks of about 180 and 115, respectively. That’s unavoidable, I think, especially as the event was free and the world is of course bonkers right now — so I do believe the emails from people apologising for having to bail out at the last minute. All in all, I’m very happy with the participation and, while it is frustrating not to be able to see people and physically interact with them (whatever that means… I seem to remember enjoying it…), I thought that the online format was great for reaching a much bigger and more varied audience. We had participants from America (not just the US — the whole continent!), Africa and East Asia — we did try hard to accommodate different time zones by having one session in the (UK-time) morning and the other in the evening. I thought people would pick and choose the one that suited them best, but the first participant at 9am UK time on the Monday morning was somebody from Mexico! This made me think that we should do more to involve low- and middle-income countries in particular in our consortium — especially as they may be at a stage where their HTA systems are attempting to organise, and so it may be a fruitful collaboration.
  • The quality of the talks was generally very high! I enjoyed all the technical discussions and we’ve also generated a very nice repository containing the examples shown by the speakers. On the other hand, I think that some of the less technical people in the audience may have struggled to follow parts of them — there was a very steep increase in the level of technicality compared to previous years. This isn’t bad — I liked it very much; but perhaps it is something we should re-think. Maybe this will eventually morph into a fully fledged conference, with “plenary”-type talks aimed at a very general audience (who may not be super technical yet, or may even never get there — but may benefit from hearing some of these things) and “parallel”-type talks that go very deep into Nerdland…
  • The panel discussion was very interesting. We purposely picked panelists who might be critical of our overly enthusiastic views. I think we need to acknowledge that, despite our best efforts not to, we may still come across a bit as a militia that is completely sold on a cause and on a mission to kill off all other tools and approaches until victory is ours. As we say in our ViH paper, this isn’t really about R — well, it is partly… But we know that the fundamental point is to use the methods and tools that are best fit for purpose. And that, sometimes, Excel may do the job just fine. Personally, I agree that models should not be overly complicated — and I think that being able to manipulate tools that grant you more flexibility and a wider range of choices does mean that you may slip into “let’s try this complex thing” mode, for the nerd-fun of it. BUT: at the same time, I strongly believe that models shouldn’t be overly simplistic either — and the reverse argument, I think, can be made. If you are forced to use, or choose, tools that may on occasion prove sub-optimal, you might preclude the possibility of exploring more advanced methodology. And, more importantly, you prevent yourself from looking for and developing more advanced or appropriate modelling strategies, because they don’t tie in with the software you intend to use. My mind races to VoI considerations here, but in fact Andy made a very, very good point in highlighting how BUGS for NMA is a fantastic example of the right tool coupling with the right methodology. And I think that our work on survival modelling has been motivated by exactly the same approach — to expand the range of modelling strategies and possibly develop something new that is specific to our field.
  • Which I think leads to the most important take-home message (well, I didn’t need to take that home, as I already was home…). To my mind, our next steps should go towards consolidating the consortium into some kind of “task-force” (although I hate the terminology…) whose job would be to create and establish the “R-HTA-verse”: a collection of packages and tools that can be validated and then made available to modellers and practitioners without bias — “ours” in thinking they are the only way to do things, but also “theirs” in thinking the status quo is just as good. And this development should go hand in hand with expanding the methodology. In the panel discussion, Francois made very good points about the barriers that agencies like NICE face when presented with people like us: I think his arguments were sound — and both Liz and Venediktos also raised the good point that, sometimes, there’s a drawback in using R because packages may change and break dependencies, etc. I think that’s absolutely true and at times annoying — but there’s another side to it: the development of packages (and of R in general, perhaps) is, by definition, a lot more “progressive” than that of commercial software. This means that, yes, we can’t expect “our” stuff to be bug-free and always working, and there has to be continuous monitoring of the tools to ensure confidence on the users’ side. But at the same time, it also means that we can fix and adapt things almost in real time — and the possibility of hosting development or semi-stable versions of packages in GitHub repositories means that if you realise that my package has a problem, or doesn’t do things the way you like, I can fix that very quickly. And that, I think, is a very valuable proposition.
  • A final very good point made by Andy was something along the lines of “transparency is in the eye of the beholder”. I think that’s absolutely true, but again the possibility of revising things and making them work very quickly and almost interactively is a very valuable property that moves towards transparency.
  • I have spent way more time than I care to admit, without feeling the embarrassment of the person who pretends to be down with the kids while in reality at times just showing that he was born in the last century… But: I did manage to edit the video recording and to create a YouTube channel — I’m kind of happy with that…

I’m sure there’s a million more things that will come back to me and to others (in fact, if you were there and are reading this and can think of something I’ve missed, I’d like to hear your thoughts…). And I’m aware of the increasing level of confusion in these bullet points. And it’s not even Friday!
