Roisi Proven’s last episode was one of our most popular, so we had to have her back for a return chat! She updates us on what she’s learned about data ethics and unbiased data sets as Director of Product for Altmetric, and on the horrible story she made up that came true.
Featured Links: Follow Roisi on LinkedIn and Twitter | Altmetric | Roisi’s previous episode ‘The Black Mirror Test’ on The Product Experience | The 6 most common types of bias when working with data
Episode transcript
Lily, when the world is as messed up as it is this week, there's only one person I want to talk to. Randy, how are we going to get Tom Hanks on our podcast? Okay, make that one of two people. I mean, he'd be amazingly reassuring. I was actually talking about the other one, you know: one of our most popular guests and the woman behind the Black Mirror Test, the one and only Roisi Proven. That does make sense. And Roisi is constantly working on how to make things better. Our chat with her this week points out some ways that we can all improve things by removing data bias. She also explains what data bias is, how to ensure that you have more equitable datasets, and quite a bit more, so let's just get right into it.
Lily Smith: The Product Experience is brought to you by Mind the Product.
Randy Silver: Every week, we talk to the best product people from around the globe about how we can improve our practice, and build products that people love.
Lily Smith: Visit mindtheproduct.com to catch up on past episodes, and to discover an extensive library of great content and videos. Browse for
Randy Silver: free, or become a Mind the Product member to unlock premium articles, unseen videos, AMAs, roundtables, discounts to our conferences around the world, and training opportunities.
Lily Smith: Mind the Product also offers free ProductTank meetups in more than 200 cities. Roisi, welcome back to the podcast. It's so lovely to speak to you again.
Roisi Proven: Hello, Lily. And, Randy, it’s lovely to be back.
Lily Smith: And we have spoken to you before, so for anyone who really enjoys this episode, they should go and check out the other episode if they haven't already listened to it. And for anyone who has listened to the previous one, welcome back. Today we are going to be talking all about data and data bias. But before we get stuck into that, would you please give us a real quick intro into who you are, and also what you're doing these days?
Roisi Proven: Sure, yes. So it has changed a little bit since last time. I am now Director of Product at a company called Altmetric, which is part of a larger company called Digital Science. So it's sort of a collection of science-based companies that do various different things in the research lifecycle. Altmetric stands for alternative metrics, and it is an alternative way for academic researchers and corporate entities to understand the impact of the research that they are funding or that they are writing. When I say alternative metrics: traditionally, that's been citations, everyone sort of understands the academic citation space. But we look at Twitter, we look at news, we look at patents and policy documents. So we track the whole lifecycle of a piece of academic research to help people understand the impact it's having, and hopefully influence future impact as well.
Randy Silver: That's really interesting, that sounds really cool. But I'm curious a little bit about how some of the things that you do at Altmetric apply more widely. So research is this really rigorous process, with standards and peer reviews and things like that, and product people, we just kind of wing it a lot. I mean, I hope that's an accurate description of academic research. Am I right on that? And what kind of things can we learn from them?
Roisi Proven: I would say that, yes, it is an awful lot more rigorous generally than your average common-or-garden startup process. But at the same time, there's a little edge of that scrappiness as well, especially if you look at what's happened over COVID with the rise of preprint servers. That is, prior to the peer review process, people publishing very early-stage research. So there's this sort of scrappiness that exists in preprints that is really exciting in a lot of ways, because, you know, they're moving fast, but scary in a lot of ways too, because the "move fast and break things" of the startup world has much bigger implications when you're talking about medical research, for instance. It's been a big baptism of fire for me; I didn't come from an academic background. So seeing the attention to detail that goes into every little thing, and the planning that goes into just one piece of research, is so intense to me. But there is this edge of academia that is very familiar to me as a product person: the sort of relentless curiosity, the need to find something new, and the excitement when you find it, is something that I really resonate with. So it's been a really interesting experience learning about that world and the different elements of it. But I would say that, generally speaking, they are less scrappy than your average startup product manager, for sure.
Randy Silver: Is there a practice that we should all be emulating? Is there something you know that they do that we need to pick up on?
Roisi Proven: Oh, I think the spirit of collaboration is the thing that we should pick up. And not just collaboration between people that have the same idea, but collaboration with people who very strongly disagree with one another. The willingness to be disagreed with, I think, is something that we don't value enough in the corporate world. We're sort of worried about perception a lot, and don't tend to disagree as strongly as we should, maybe, in some circumstances. So I think this spirit of debate and the spirit of collaboration is something that startup ecosystems could learn a lot from.
Lily Smith: Is that in terms of how they kind of interact with each other? Or is it just a part of the process, almost?
Roisi Proven: I would say that the process invites a lot of eyes on the same piece of work, because, you know, it goes through the authorship process, and then those authors will have sort of tenured advisors that contribute, and then it will go through the peer review process. And then as soon as it goes out into the world, that's just one gigantic, very noisy and rude peer review. So there's a lot of attention that goes into the work, and people want to make sure that it stands up to all that attention. The collaboration is built really strongly into every stage of the process. Whereas in startups, you know, we're used to sort of diverging and converging, the double diamond. In academia, it's collaboration all the way, and making sure everyone knows everything. And that can add inertia, yes, but it also seems to produce stronger, more robust work at the end.
Lily Smith: So you've mentioned that you've been doing a lot of work around data bias and making data sets more equitable. But what do you mean by an equitable data set?
Roisi Proven: So data is a very tricky thing, because people tend to think about it in a vacuum, as this mechanical thing that is completely objective. But datasets inherit the bias of their creators, and you can't avoid that when you create and grow a dataset like we do. Our dataset is constantly growing; we have hundreds of millions of tweets, for instance, in our database. I checked the other day, we had 441 million tweets in our database. So our database is pretty big and full and rich, but it has baked into it all the systemic bias that comes with everyone who contributed to it. So over the past couple of years, we've done a lot of work to try and break that bias, to go to places that we wouldn't normally go. We've been working very hard on things like linguistic diversity: we only covered English Wikipedia, we now cover 13 different languages, and that's growing every day. For policy sources, we realised that the African continent was almost entirely unrepresented, so we went out and identified a number of important think tanks and NGOs from the African continent, and they cover, I think, 18 or 19 different countries. Still a long way to go, but that representation, in the global south particularly, was super important to us. And even things like patents: we were only getting patents from the US and UK patent offices, and now we get them from 18 different offices around the world. All of those gaps were contributing not just to less attention for people from those countries, because it's a bit more nuanced than that; it's more about under-representation of certain issues, because the issues that face people in Spanish-speaking countries, or Portuguese-speaking countries, or Chinese-speaking countries are not going to be the same as those in English-speaking countries. So a big part of our focus over the last couple of years has been reducing that bias towards the Western world. We also look at political diversity. Because of the makeup of our company, things were skewing in a specific direction, and so we deliberately went out and looked for the opposing direction in policy sources and things, to make sure that we weren't under-representing or over-representing any specific area. Similarly, we went out and looked for policy bodies that focus on indigenous issues and on social justice issues. So it's always about constantly diversifying, but it is hard work, and it is work that we're going to have to continue to pay very close attention to. That's the thing with bias in datasets: it's not a one-and-done, where you make a dataset that's unbiased and then you step back and go "Ta-da, objective dataset", because every time a human inputs something into it, it's picking up a little bit of that person. So you have to be constantly working at it. And I think that's the bit that I found really rewarding and exciting. It was really amazing to see the change in the types of attention we were getting when we started adding different languages of Wikipedia. The Turkish Wikipedia, for instance, has only been legal in Turkey for, like, five years, not long at all, but it's got some 700,000 unique pages. So the community of Turkish-speaking people were going in and creating content that wasn't just translations from English Wikipedia; it was unique content about their culture and their interests and people that are important in the country.
And just seeing that evolve has been really interesting. But it's also quite sobering, because you realise how hard it is to be fair.
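(To make that kind of gap-hunting concrete, here's a minimal Python sketch of the sort of representation check a data team could run. This is not Altmetric's actual tooling, and the record fields are invented for illustration.)

```python
from collections import Counter

def representation_report(records, field="country"):
    """Break records down by an attribute (country, language, source type)
    so under-represented groups are easy to spot."""
    counts = Counter(r[field] for r in records if r.get(field))
    total = sum(counts.values())
    return [(value, n, n / total) for value, n in counts.most_common()]

# Invented sample records; a real pipeline would pull these from a database.
records = [
    {"source": "wikipedia", "language": "en", "country": "UK"},
    {"source": "policy", "language": "en", "country": "US"},
    {"source": "policy", "language": "pt", "country": "Brazil"},
    {"source": "patent", "language": "en", "country": "US"},
]
for country, n, share in representation_report(records, field="country"):
    print(f"{country}: {n} records ({share:.0%})")
```

The code is trivial on purpose: the hard part Roisi describes isn't computing the skew, it's deciding which new sources (languages, patent offices, policy bodies) to add in response.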
Randy Silver: Okay, so you said a lot of really interesting stuff there, and I want to dig into some of it. You talked about how, because of the nature of the company, you were skewing in one direction and had to go the other. And I'm going to assume it's not just high-quality content versus stupid content, it's not as
Roisi Proven: that simple…
Randy Silver: Well, but we see that sometimes in media, where it's like, oh, we've got 9 million people saying climate change is real, but we have to balance that with the one person who says it's not, kind of thing. So it's not that simple, there's a lot of nuance to it too. You talked about some leading indicators, the different types of attention that you're getting, but how do you measure the actual effect? How do you measure that you have achieved, or are achieving, a more balanced data set?
Roisi Proven: So that's hard, because that would mean I'd really have to define what equitable is. And as someone who is subject to my own privileges and disadvantages, you know, I know what some of my privileges are, but I don't know what all of them are. So it's impossible for me, as a single human being, to see what fair looks like, and in fact it would be sort of scary if I tried. I think all that we can do is look for voices that we aren't seeing in our dataset and try to make sure that they are there. And I don't mean the one person who thinks climate change isn't real, because we are science-driven, and if there were a site saying the Earth is flat, or climate change is not real, that would be an unscientific place, and so it's not relevant for us. People are welcome to hold those views, but they don't have a basis in scientific fact. Whereas we do need to make sure that the spread of voices that we are representing is good, and that the research that's getting attention makes sense. If there's a specific category of research that's getting crazy attention and everything else is teeny tiny, we're like, okay, is this because of the research, or is this because of our data? And that's where we dig in and start looking at how things are breaking down, and how many new, novel pieces of research are added when we add a new attention source. So, for instance, when we added those African policy sources, we saw a big increase in the amount of research from the African continent that had never received any attention before suddenly receiving attention, because not everything is spoken about on Twitter. Twitter is definitely our biggest source of data, so lots of content, but Twitter in itself is biased. By adding these policy sources, in these languages, with data from other areas, we're able to see new, novel research get represented. One of the areas where, historically, altmetrics and metrics in general have not represented the work well is the humanities. It's very broad, but the medical and STEM subjects tend to get a lot of network attention; for the humanities, it comes from different places. And one of the places where we've seen a massive uptick in the amount of novel research being pulled in is policy sources, because a lot of sociology and psychology, and even art research, will be cited in, say, an OECD policy document. And that might happen two years after it's published. With traditional alerting, you won't find out about that, but we're able to pull that in, and seeing that novel research come in, and showing the lifecycle of those less conventionally popular pieces of research, is really interesting. We do a Top 100 every year of the highest attention scores, and last year we switched from doing the highest scores overall to doing the top five from each research area. And it was so different. If we'd done a regular Top 100, it would have been entirely COVID, which was the main reason we moved away from that method, because nobody wants to read 100 more papers about COVID right now. So we did this top five instead, and it was really interesting to see the more unusual papers come up, papers that people maybe wouldn't have noticed otherwise, had we not surfaced them a little bit. So that's been really interesting.
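(The Top 100 change Roisi describes translates neatly into code. Here's a minimal pandas sketch; the toy data and column names are mine, not Altmetric's.)

```python
import pandas as pd

# Toy stand-in for attention data; in 2020 a global Top 100 would have
# been dominated by one field, exactly as described above.
papers = pd.DataFrame([
    ("Vaccine efficacy trial",  "Medicine",   9800),
    ("Long COVID cohort study", "Medicine",   9500),
    ("Exoplanet atmospheres",   "Astronomy",  4100),
    ("Bronze Age trade routes", "Humanities",  650),
    ("Coral bleaching survey",  "Ecology",    2300),
], columns=["title", "field", "attention_score"])

# Old approach: one global Top N, so high-attention fields crowd out the rest.
top_overall = papers.nlargest(3, "attention_score")

# New approach: top N per research area, so quieter fields still surface.
top_per_field = (
    papers.sort_values("attention_score", ascending=False)
          .groupby("field", sort=False)
          .head(1)  # head(5) for a real "top five from each area"
)
print(top_per_field)
```

Grouping before ranking is what lets a humanities paper with a modest score appear alongside the COVID juggernauts.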
Lily Smith: So it sounds like you're addressing bias in a few ways: by making sure that you've got more sources available, but then also finding ways to bring attention and visibility to some of those more unusual papers or pieces of research. Is that correct?
Roisi Proven: Yeah, I'd say that. It's trying to diversify what we promote, as well as making our datasets themselves more diverse. And I should say, this can't happen with just one person. We have a data bias working group at Altmetric, so we bring people together from around the company, and again, try to make sure that group of people is also diverse, because, again, you're putting the bias of the humans into the data. So it behoves you to have as diverse a group of humans as possible.
Lily Smith: And how does this working group come together, and what's its mission, if you like? I mean, the mission seems obvious, to remove the bias from the data, but how do they go about doing that? Is there a regular process they follow?
Roisi Proven: So generally we meet quarterly, and usually look at a specific area of our data, because if we looked at it as one whole thing, that would be completely overwhelming. So we instead look at it in little chunks. Last year, for instance, we did an assessment of our Twitter attention, and we looked at it purely from a statistical point of view: is there any statistical bias, is there an over-representation of a certain area or a certain type of Twitter account, or attention all coming from the same account? So we looked at the statistical bias first, and then the next step for us will be to look at it with a more subjective eye. Twitter is something that is tricky for us, because what we don't want to do is become a moderation layer on top of Twitter. At the end of the day, Twitter can barely moderate itself, so who are we to think that we can do better? So generally what we do is follow Twitter's policies as closely as possible: if they delete a tweet, we delete a tweet; if they delete an account, we delete an account. That will definitely be imperfect, but it is the fairest we can be with the data that we have, I think. So we've looked at it statistically, and we'll also look at it subjectively, and we want to do that with every one of our attention sources. It's definitely going to be a long road, but it is something that we as an organisation care really deeply about, because we want this data to be fair, and to be useful to people, and to enable them to have better conversations about their work.
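(As a flavour of what a purely statistical first pass might look like, here's a small sketch that flags accounts contributing an outsized share of mentions. It's illustrative only; the threshold and field names are assumptions, not Altmetric's method.)

```python
from collections import Counter

def flag_overrepresented(mentions, key="account", share_threshold=0.10):
    """Return values of `key` whose share of all mentions exceeds the
    threshold, e.g. one account dominating a paper's Twitter attention."""
    counts = Counter(m[key] for m in mentions)
    total = sum(counts.values())
    return {k: n / total for k, n in counts.items()
            if n / total > share_threshold}

# Invented data: one account posting 40 of 100 mentions.
mentions = [{"account": "@bot_farm"}] * 40 + \
           [{"account": f"@user{i}"} for i in range(60)]
print(flag_overrepresented(mentions))  # {'@bot_farm': 0.4}
```

A check like this only raises questions (is this organic attention, or one very loud account?); answering them is the subjective pass Roisi mentions.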
Lily Smith: And do you have that as a goal or an objective for the business? And how do you measure your progress against what you imagine to be the gold standard?
Roisi Proven: I would say that we're still working out what that looks like. As a data company, it's something that we feel has to always be there: we should always be striving to be as fair and equitable as possible. But I don't know that there's necessarily an end to it. For me, it's more of a process that we have in place and will continuously run, to continuously assess the bias, because the bias will never go away. We could make the dataset whatever we decide to define as equitable, we could make it perfectly equitable, and it would be inequitable again the next day because of the volume of data that we take in. So all we can do is be conscious of it, and be as diligent as possible when it comes to addressing those areas and mitigating against them.
Lily Smith: When you're trying to address a bias, is there a way in which you need to check that you're not introducing new bias, which I think you kind of spoke about earlier? Do you have to test the data for bias?
Roisi Proven: I think it's tricky, because, again, bias is such an amorphous thing that you can't really test for it. You can obviously look at the data that's there. So for instance, when we add a new policy source, we'll go and look at the policy documents to make sure that they are citing research in a sensible way, and that it makes sense from a surface-level perspective. Then once we add it, we can see how it evolves and whether it changes things, but it is difficult to closely monitor. We can do spot checks, and we can do deep dives. The good thing about our team is that they are all very curious people, so there are people constantly poking at the data and doing different data visualisations and reports and just digging into what's there. But it is hard to closely monitor. So it is mostly about identifying the gaps, trying to fill them, and keeping an eye on things to make sure that they're not growing arms and legs and a tail and running off in a direction that we don't intend. It is an inexact science, for sure, and I wouldn't want to say that we have it solved, even remotely. We are on a constant journey, and the most that we can do is make sure that the data is as useful as possible for the biggest number of people. And I think there are a lot of really interesting and unique insights that you can draw from our data that you wouldn't necessarily get from traditional citations and things like that.
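(For flavour, here's what an automated spot check on a newly added source could look like: sample a few documents and confirm they contain recognisable research citations. The DOI regex is a crude stand-in for whatever citation extraction a real pipeline uses, and the data is invented.)

```python
import random
import re

DOI_RE = re.compile(r"\b10\.\d{4,9}/\S+")  # rough pattern for DOIs

def spot_check(documents, sample_size=5, seed=1):
    """Randomly sample documents from a new source and count how many
    DOI-like citations each contains, as a sanity check on the source."""
    rng = random.Random(seed)
    sample = rng.sample(documents, min(sample_size, len(documents)))
    return [(doc["id"], len(DOI_RE.findall(doc["text"]))) for doc in sample]

docs = [
    {"id": "policy-001", "text": "Building on 10.1234/abcd.5678, we find..."},
    {"id": "policy-002", "text": "No research citations in this one."},
]
for doc_id, n_citations in spot_check(docs, sample_size=2):
    print(f"{doc_id}: {n_citations} DOI-like citations")
```

A zero across the whole sample would suggest the source isn't citing research in a sensible way, which is exactly the surface-level check described above.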
Randy Silver: I was going to yell at you for doing the world's worst sales pitch there, but you rescued it at the end. You're just being humble, that's okay.
Roisi Proven: I'm genuinely very proud of the data that we provide and the insights that you can get from it. I love digging into our data sets, I'm such a nerd about it. I'm working on a couple of papers, which is hilarious to me as someone who never went to university. But I'm working on a couple of papers about the Wikipedia data, because it's just so interesting, and I love it so much. And the reason that I dig so deeply into this bias stuff is because I care so much about it being as useful as possible for the biggest possible number of people. So I am really proud of the service that we're able to provide, and I just want to keep making it better.
Randy Silver: Much better. So, not all of us work with tracking things in academia or tracking data this way, but pretty much all of us do work with data and analytics. And I'm curious: you've run analytics products before, but in a different niche, and it sounds like on a different scale. Is there something you've learned that's applicable more broadly than the company you're working in, something that other product managers might be able to take away from all this?
Roisi Proven: Yeah. So I think, with data science and analytics as a field, there's that old saying that any sufficiently advanced technology is indistinguishable from magic. We are really bad for playing magicians. Across analytics, AI, data science, machine learning, there is so much smoke and mirrors, and we claim to be able to do things that we are not doing. We've set the bar so impossibly high for ourselves that people assume that it all just works. And with things like bias in datasets, I've been banging the same drum, but the reason I bang it so much is because I don't think enough companies that do this are banging that drum, because we want to claim that we're capable of making it a cleaner experience than is humanly possible. Anyone who claims that something is 100% accurate, or can be used for life-critical things, is exaggerating at best. I think that we are not responsible enough as a group; we don't think deeply enough about the human impact. At Altmetric, that human impact could be relatively small, but in other areas it's huge. When I see organisations like Palantir, for instance, and the way that they go about data brokering... data is a commodity, there is no denying that, but they completely neglect the human element, and negatively affect humans as a result. When Palantir started working with the NHS, they did this big thing about how many data points they had on people, and I'm like, no, I don't want you to know that much about me. Because who are you giving that data to? Now I don't trust you. So I think we pretend that there is more to the technology than there is, and we're not good at acknowledging the risks and working to mitigate against them, because we're too busy trying to make it look fancy and shiny and infallible. Nothing in data science is infallible; nothing is 100%, anything, ever. There's always a confidence threshold, and if it's high enough, cool. But we need to stop pretending that what the computer says is completely right and completely above reproach, because that's not true.
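(That last point, that there's always a confidence threshold, is worth a tiny sketch. The threshold and labels here are invented; the point is simply that a responsible system routes low-confidence output to humans rather than pretending certainty.)

```python
def decide(probability: float, threshold: float = 0.9) -> str:
    """Act on a model score only above a confidence threshold;
    below it, hand off to a person instead of faking infallibility."""
    return "auto-accept" if probability >= threshold else "needs human review"

# No model is 100% right, so the low-confidence path must exist by design.
for p in (0.97, 0.62):
    print(p, "->", decide(p))
```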
Randy Silver: So what you're saying is that the Peter Thiel-backed company may not have listened to this episode.
Roisi Proven: I mean, there might be a couple of software engineers or product managers sitting in a dusty corner at Palantir, listening quietly on a shortwave radio or something like that. But no. That's why, when I see companies, who might as well remain nameless, talk about being apolitical, I get real cross, real fast. Because the things that underpin the systems we work in are political, and we can't make them not political. You're going to have to take a stand. And that is coming through especially in the case of Coinbase right now, where they can't be apolitical, because millions of people's lives depend on them taking a side, and they've backed themselves into a corner. It's just silly.
Randy Silver: Okay, so staying on this general topic, and maybe that specific story: when we were prepping for this episode, you told us that one of the stories you made up for your talk about the Black Mirror Test has since come true. We'd be remiss if we didn't ask you about that before we let you go.
Roisi Proven: I was legitimately gobsmacked, because I made the stories in my original Black Mirror talk, the one I did at ProductTank in London, as ludicrous as possible. Because I was choosing real companies, I didn't want anyone from those companies to be like, "Hey, we're not baddies." So I deliberately made them really hyperbolic and stupid. One of the companies that I picked was a wearable company called Aira, and they make glasses with, sort of, a little person in your ear who can see through the glasses and narrate the world around the wearer. And the hyperbolic, crazy story that I made up was that the glasses went out of signal and turned off, and the blind person was lost, and it was, you know, oh no. Then I came across a story in a sort of science-news thing, and the title of the story was "Their Bionic Eyes Are Now Obsolete and Unsupported". It was about a group of people who had all trialled this new wearable bionic implant inside their eye that gave them vision again. They were fully blind, and it allowed them to see colours and shapes, so it gave them some vision. And one of the users, one of the humans that had this implanted into their head, was walking to a train, and her world went black again, because, without warning, the company had gone bust, and they hadn't notified any of their users that that had happened. So slowly but surely these people's implants are turning off, and it's still happening now: some of them have gone dark, some of them are still working, and there's no warning. She said she heard a beep in her head, and then it went dark. How dystopian and terrifying is that? You hear a beep, and then you're blind. The fact that I went as dystopian as I possibly could think of, and it still isn't quite as bad as what actually happened, shows me that that exercise had a purpose, and that it should be done as much as possible. Because that's just... yeah, that's insane.
Randy Silver: I've got nothing on that. So, yeah, we do need to really think through the implications of everything we're doing: graceful degradation of service, worst-case scenarios, what happens to the people you're trying to help if you're not there to help them anymore.
Roisi Proven: Exactly. Yeah, think about what comes after you. Think bigger.
Lily Smith: Roisi, this has been really interesting, talking to you again and touching on those Black Mirror stories. But before we wrap up, it'd be great to have your top tips for trying to avoid bias in your business, regardless of what the business is. Or one top tip: what's the thing that you would love everyone to take away from this conversation and really think about when they're looking at their own products and businesses?
Roisi Proven: Yeah, I would just say: wherever you can, look for people that don't look like you and don't think like you. Look for people that feel different, because if they feel different, then they're adding something different to the conversation. And I realise that this talk can feel a bit gloomy sometimes, and things are getting pretty dystopian. But I think as long as there are people who are curious, and kind with that curiosity, there's still hope that things will take a better turn. So I would encourage everyone to be kind and be curious.
Lily Smith: Roisi, thank you so much for joining us on the podcast.
Roisi Proven: Thank you for having me. It was lovely to speak to you again.
Lily Smith: Our hosts are me, Lily Smith,
Randy Silver: and me, Randy Silver.
Lily Smith: Emily Tate is our producer. And Luke Smith is our editor.
Randy Silver: Our theme music is from Hamburg-based band Pau. That's P-A-U. Thanks to Arne Kittler, who runs ProductTank and MTP Engage in Hamburg, and to the band for their willingness to let us use their music. Connect with your local product community via ProductTank, our regular free meetups in over 200 cities worldwide.
Lily Smith: If there's none near you, you can consider starting one yourself. To find out more, go to mindtheproduct.com/producttank.
Randy Silver: ProductTank is a global community of meetups driven by and for product people. We offer expert talks, group discussion, and a safe environment for product people to come together and share learnings and tips.