This episode is not covering Google Maps as such, but another Google product: Crowdsource. To explain what Crowdsource is all about, and why you as a local guide might find it very interesting, I had an interview with Crowdsource Product Manager Anurag Batra.
- Email: firstname.lastname@example.org
- Reach out to me on Twitter: LocalGuidesGuru
- My personal blog: janvanhaver.com/
- The local guides section on my blog
- Crowdsource app on Play Store and on the web
- Crowdsource Facebook page
- Crowdsource community team email for local guides wanting to host local chapters: email@example.com
- What a Great Idea: Batch Edit Mode
- The main post on the Local Guides Clean The Map project (#LGCTM) on Connect
- The #LGCTM Logo Challenge – DEADLINE: 31/01/2020
- Sign up to become a local guide yourself
- Local Guides Connect – the official forum for Local Guides
Jan Van Haver 0:06
Hello and welcome to yet another episode of the LetsGuide Podcast, the ultimate podcast for Google Local Guides. You’re listening to episode number 18. This episode is in fact not covering Google Maps as such, but another Google product named Crowdsource. To explain what Crowdsource is all about, and why you as local guides might find it very interesting, I had an interview with Crowdsource Product Manager Anurag Batra.
Jan Van Haver 0:37
Before we start the interview, I want to point out that I’m not an official representative of Google, I’m just a local guide. So everything you hear in this podcast or everything I come up with is just my personal interpretation of things. This episode is recorded in the middle of January 2020 and is describing the situation as it is today. Should you be listening to this episode at a later date, things might obviously have changed. So let’s hear what Anurag has to share with us during the interview.
Vanessa P. 1:12
Let’s get started.
Jan Van Haver 1:15
Hello, Anurag, please introduce yourself to the LetsGuide Podcast audience.
Anurag Batra 1:22
Hello Jan and hello local guides. I’m excited to be here and thank you for giving me this opportunity to talk to everybody. My name is Anurag Batra and I am the product manager for Google Crowdsource. I’m based out of California, and I work at the Google headquarters in Mountain View.
Jan Van Haver 1:43
Okay, that’s great – so you’re product manager for Crowdsource. Can you tell in general terms what that is as a product?
Anurag Batra 1:53
Yes. So Crowdsource is a mobile app. And it’s also on the web at crowdsource.google.com. It’s used by people to answer very simple questions, things like “Do you see a car in this picture?”, or “Here, this little bit of a handwriting scribble, what does it say in your language?”, or “What is the sentiment behind this sentence in your language?”. So, these are simple questions that people can answer in a matter of three to five seconds. And the benefit of that is that it helps us make Google products diverse for everybody around the world. So the handwriting answers, for example, help us make handwriting recognition on Keyboard better. The imagery answers help us make products like Photos better. So we want to make sure that Google products work better for everybody around the world, and answers from people help us achieve that.
Jan Van Haver 2:55
Okay, so Crowdsource is a product for you that is helping to solve a number of problems then, I get.
Anurag Batra 3:06
That is correct. So the problem that we have is one of diversity of data on which machine learning is based. Now, what does that mean? If you think about most of the magical experiences that we take for granted today – like you can go to Google Photos, and you can search for a photo of pancakes that you ate a year ago by just typing in ‘pancakes’. Or you can talk to the Assistant and you can say “Hey, play me a song by The Beatles” – right? – and it understands your voice; it understands what you’re saying. But behind the scenes, there is a lot of complexity in this. There is a technique called machine learning that is based upon lots and lots of data that actually powers these experiences. Now, the problem there is that most open data available out there, on which Google and a lot of other developers out in the world build their machine learning algorithms, contains more data from the Western advanced countries. It does not have as much data from a lot of languages and representing a lot of cultures from Asia, and Latin America, and Africa, and Eastern Europe, and so on – right? And we’re trying to solve this problem, we want to make sure that all of these magical experiences work just as well for everybody around the world. So when people are searching photos, they can search not just for pictures of pancakes but also pictures of bharatas or pad thai. When people are talking to the Assistant, the Assistant can understand African English accents, or Indian English accents just as well as American English accents and, for that matter, every language in the world, in every accent in the world.
Anurag Batra 5:02
So you can imagine – these are like very large-scale problems that we are trying to solve. And we are asking the whole global community to get involved and help us solve this problem.
Jan Van Haver 5:11
You’re mentioning every language in the world. Do you have a kind of number of how many languages are supported?
Anurag Batra 5:20
Yeah. Well, it depends. So, there is a different number of languages that the Assistant supports – I don’t have the numbers off the top of my head. But there’s a different number of languages that the Assistant supports, then there is a different number of languages that the Keyboard supports for handwriting gestures, there is a different number of languages that Translate supports, for example, and so on, right? So we are trying to make sure that we expand these numbers to the tens or the low hundreds, and then beyond to the thousands of languages that are out in the world.
Jan Van Haver 5:52
And contributors – that’s what I know from my own experience – can also contribute in multiple languages, if they master more than one language sufficiently.
Anurag Batra 6:03
That is correct.
Jan Van Haver 6:06
So how, or where, should I see Crowdsource within Google as a whole? So which other Google products, could you say, rely on the input from Crowdsource?
Anurag Batra 6:19
Yeah, so if you use Crowdsource you will see – on the app, as well as on the mobile web – you would see a lot of different tiles, and each of those tiles allows you to answer a different type of question. For example, you can go and transcribe some handwriting samples, or you can offer translations, or you can provide the sentiment in a sentence, and so on. So the handwriting task that I talked about helps make the Keyboard better, and so the Keyboard team benefits from that data. The sentiment task feeds answers into machine learning models that are in turn used by Google Maps and the Play Store to better classify reviews that are coming in.
Jan Van Haver 7:05
Okay, it’s interesting for the local guides. They will be glad to hear that the reviews that are written there are also being monitored in this way.
Anurag Batra 7:16
Correct. And so we want to organize the reviews better so that people who are searching for a restaurant or a place to go out can actually find out “What is the good thing about that place? And what is the bad thing about that place?” very quickly. You know, sometimes there are thousands of reviews and rather than having to read all of those reviews, we want to be able to find out “Well, what are people saying good stuff about? What are people criticizing?” And we want to be able to summarize that very quickly from those thousands of reviews, and for that we have to be able to understand it – understand what is positive sentiment and what is negative sentiment. And you know: we want to do this for every language in the world. We want to do this for Korean just as well as we do it for English.
Jan Van Haver 8:00
Yeah, that’s quite interesting. So, there’s a number of different kinds of contributions that Crowdsource users can make – you mentioned some of them. I find the Smart Camera, which is a new one, I think, or a relatively new one, quite interesting. Could you expand a bit on that?
Anurag Batra 8:20
Yes, you are now actually talking about one of my favorites. Okay, so, Smart Camera is something that we just introduced a couple of months ago. And it’s a unique task because it has got a mixture of two things: you can actually try out an actual machine learning model that powers the magic behind some products like Camera and Photos and so on. Now, the model that you’re trying out is one that can detect objects. So, you know how when you are pointing the camera at something, the Camera is able to actually identify things and then focus on those things, right? Now, that behavior itself is powered by machine learning. And the model that powers that is what we are trying out over here. So what we want to do is we want to make sure that this model works well for everybody around the world, that it is able to identify a gate or a cup as an object just as well in India or Indonesia or the Philippines or in Nairobi, as it does in the United States or Canada or England. So we want a global population to be able to try it out, point it at just things around them and see how well it is able to just detect something. So detection is just one part, and then the next thing it does is it tries to identify that thing. Now the identification piece comes in really handy in products like Google Lens. When you go to a restaurant, we want Lens to be able to actually look at the menu and kind of tell you, well, what are the various kinds of things that are in that menu. Or when you point it at a dish, we want to be able to tell you what dish is there. So, if you’re traveling to a new country, you should be able to use Google Lens to identify various kinds of dishes or various kinds of street food and figure out what are the ingredients and so on, right? So to power those kinds of magical experiences, we want that identification of objects to work really well as well. So you get to try both of those things as a raw machine learning model in Crowdsource.
And wherever it works or doesn’t work, you can send us feedback by just tapping on that thing. And you can submit that photo and let us know whether it was able to identify that thing right or, if it was not able to identify it – what is that thing? – so that we can actually train our models and make them better.
Jan Van Haver 10:54
I have to warn the listeners at this point. This Smart Camera thing is just addictive: you try it, you point your camera at something and “Hey, it’s recognizing it. Okay, what else do I have?” And before you know it, you’re just running around the house and taking pictures of any object you come across. And I have to say it’s working really nicely, because it’s recognizing more or less anything that you point your camera at. So well done, I would say.
Anurag Batra 11:27
Well, I’m really happy to hear that.
Jan Van Haver 11:28
Yes. And that’s your personal favorite?
Anurag Batra 11:32
So, yes, Smart Camera is one of my personal favorites on the imagery side. And then my other personal favorite on the text side, the languages side, is Sentiment, because it’s just fascinating to read all of these things that people are saying, and you never think as much about the sentiment – it’s just like your brain automatically assimilates that. But I am sometimes thinking “Hmm, how would I read this particular sentiment?”. So that gives me an idea of how hard it will be for machines to understand this, and to be able to classify those reviews. And so that gives me an appreciation of what a great computer science problem this is. And so just the nerd in me kind of wakes up and gets very energized by that.
Jan Van Haver 12:18
Yeah. So people can contribute on their mobile devices or on the laptop, just in the browser?
Anurag Batra 12:25
Correct. On mobile, the mobile app by itself is available for Android only for now – we are kind of working on solving it for iOS, but anybody can, on their mobile devices or on their laptops, just open up crowdsource.google.com and help out over there.
Jan Van Haver 12:46
Okay, that’s fairly easy. Something very recognizable for local guides in the Crowdsource program is the fact that there are points and levels. Can you explain a bit how that is organized and working in Crowdsource?
Anurag Batra 13:05
Yes, so it’s very simple: for now, we count the number of contributions that you have made. And for now, every contribution counts as one contribution. So whether you contribute imagery, or you answer a question about image label verification, or whether you tell us what a handwriting sample is, everything counts as 1 contribution. As you make more contributions, you will level up and you unlock different kinds of badges. And with certain levels, there are certain kinds of perks. Just like with local guides, these perks focus on community. So for example: people who reach level 5 qualify for our community newsletter, and they can opt in to receive it. And when you reach level 10, we actually invite you to a Hangout with our community team, and you get to meet other contributors from around the world and also ask any questions that you might have. And when you reach level 13, we feature you on our community Facebook page as a top contributor.
Jan Van Haver 14:07
Yes, I’ve seen those. So I know about the level 10, because that’s the level I’m currently at. And I was indeed invited for a Hangout. But then I was seeing all these people on Facebook with this specific Crowdsource thing. So that’s level 13. Okay – one challenge to go for me!
Anurag Batra 14:26
And congratulations on crossing level 10.
Jan Van Haver 14:28
Thank you. Okay, you mentioned that for now, everything is 1 point. Any plans to make a differentiation there? That some things would weigh heavier on the points?
Anurag Batra 14:43
Yes. So that’s something that we are working on designs of. It’ll still be a little while before we roll it out, but we have heard loud and clear from our contributor community that some tasks are markedly more complex than others and will take more time, and they would be greatly incentivized to do those more if they carried more weight.
Jan Van Haver 15:05
Yes, like checking translations and stuff, that can be really time consuming.
Anurag Batra 15:11
Exactly. And also contributing imagery takes time, compared to just kind of verifying a label. So yes, we are working on some early designs for what would be a meaningful points framework, so that people are rewarded in proportion to the amount of time and complexity that every task poses.
Jan Van Haver 15:30
Okay. And a while ago, you also introduced a global leaderboard for the various activities. What kind of effect did that have on the community or on the number of contributions?
Anurag Batra 15:46
Yes, that was a very exciting feature to launch. So our goal was to basically just kind of make everybody more aware of the fact that there is a vibrant community out there that’s contributing. And so we are always trying to figure out “Well, how do we bring people together?”. And so the leaderboard was our first step in that direction. So we kind of exposed it by task types, because we found out that everybody has their own pick of which task type they love. And so we would like for everybody to kind of get an idea that there are more people like me out there who also love this particular task type. And also, of course, you kind of feel good being recognized. So when you have contributed a lot, it’s kind of good to get that status recognition as well. So that’s what we launched the leaderboard for. And we saw a lot of excitement in the community. It had been one of the most requested features from the community for a long time. So we were relieved to kind of finally launch it. And also it was amazing to see how people were sharing it on social media and how they were just like super energized, either by having achieved a certain status within the leaderboard, or just kind of totally energized to achieve a status that was their goal.
Jan Van Haver 17:05
So probably that made a spike both in contributions by existing contributors, and perhaps also in new contributors joining in.
Anurag Batra 17:16
That is correct. So whenever we launch a new feature, there’s always kind of like a lot of people who get really curious. And so they open up the app again or go to Crowdsource on the web and they check it out. But the leaderboard by itself was one of the major energizers of the community.
Jan Van Haver 17:34
Okay. So you already mentioned that there’s new things coming up. But perhaps there’s still some other plans you want to share for the near future or not so near future?
Anurag Batra 17:48
Oh, yes, definitely. So we’re always working on making enhancements to the product, tuning into the voice of our contributors and seeing what is it that they want the most. So some of the minor things that we are launching are things like the ability to opensource the data that has been contributed through Smart Camera. So, I don’t know if you’ve noticed, but Image Capture, which is how people normally donate images, has got an ability for people to opt into opensourcing their images, so that they can be used not just by Google, but by every developer and researcher out there. We hadn’t introduced that on Smart Camera, but we are working on introducing that, so that those images can be opensourced as well. On the larger things: I talked about points, that’s what I’m most excited about. I’m also excited about some investigation that our engineering team is doing into launching the app on iOS. We know that we went to a lot of markets where there were a lot of iOS users, and they were disappointed that we don’t have the app available on their platform.
Anurag Batra 18:51
So that’s the other one. On our community side, we are looking forward to a summit that we are organizing next week in Singapore. So this is a Local Guides-style, invitation-only summit. And so we are inviting some of our most active contributors and local influencers from around the world.
Jan Van Haver 19:15
Now you’re getting local guides interested!
Anurag Batra 19:19
That’s right. And actually, just like you, a lot of our top contributors are also a part of the local guides program. So yeah, we’re looking forward to that summit. It’s happening in Singapore next week. And I’m looking forward to meeting a lot of your fellow local guides over there.
Jan Van Haver 19:38
Okay, that’s great. And is there also a kind of platform like Local Guides Connect for local guides? Or where can people find information about the community or – after the event – probably some nice footage?
Anurag Batra 19:55
Yes. So we have a very active Facebook page. So, it’s at facebook.com/googlecrowdsource, or you can just kind of go to Facebook and search for Google Crowdsource. And so you would have to kind of find the official page, which hopefully would show up at the top of the search results. Because there are a lot of other local-community-created Facebook pages as well. And so these are like local influencers who have created their own local communities. We have active communities in 30+ countries. And so you would find those local pages. We encourage you to check those out as well. But there is an official page which is run by our community team based out of Bangalore, and that’s called ‘Google Crowdsource’. So facebook.com/googlecrowdsource. So that’s where we post most of our activity, whatever the community events are, and also showcase the top contributors over there. And also, we showcase any local-influencer-hosted events out there, and you will see events hosted in all parts of the world, countries in Latin America and Africa and Asia and Europe and so on.
Jan Van Haver 21:07
Okay, that’s fascinating indeed. By now, I think the local guides listening to this episode who had not heard about Crowdsource before, are really dying to know: “how can we sign up to contribute ourselves?”
Anurag Batra 21:24
Yes. So first of all, I would say, check out the Crowdsource app. So if you have an Android device, download it from the Play Store – it’s called Crowdsource. So download the app, try it out. Let us know what you think of it. And if you want to get involved more, we would love for you to get in touch with our community. You can do that by reaching out to them via Facebook, or you can write them an email at firstname.lastname@example.org. So that’s, again, email@example.com. And let them know if you want to organize a local chapter, if you want to become a local influencer, and they would be happy to hear from you. But otherwise, download the app or go to crowdsource.google.com.
Jan Van Haver 22:18
Sounds wonderful. And I’m sure a number of people will try that out right after listening to this episode. Anurag Batra, thank you very much for taking your time to do this interview. And I’m really looking forward to spending some time again on Crowdsource myself.
Anurag Batra 22:38
Jan, thank you so much for having me here. It’s such a delight to be reaching out to all the local guides. It remains one of my favorite programs from Google, and I’m so happy to be here.
Jan Van Haver 22:50
Thank you, and perhaps until a later time.
Jan Van Haver 22:55
And that was it, the very exciting interview with Anurag Batra, product manager for the Google product Crowdsource. Very, very interesting indeed. And with that, it’s time for a special section of the podcast that we have every now and then.
Vanessa P. 23:14
What a great idea.
Jan Van Haver 23:17
In ‘What a great idea’ I’m having a look at one specific submission from Idea Exchange, a very exciting part of Local Guides Connect, the official platform where you can submit suggestions to improve Connect or even improve Google Maps itself, and other local guides can then express their vote by clicking the like button to upvote that idea.
Jan Van Haver 23:43
This time, it’s an idea submitted by a fellow local guide also from Belgium, my home country: the local guide @ndsmyter (sorry, Nicholas, for the English way of pronouncing your name, but it makes more sense this way here, if you ask me). The idea is called ‘Batch edit mode’, and of course, I’ll provide a link to it in the shownotes. The idea is that, if you make a lot of edits – and quite a few local guides have been known to do that – you need to make a load of clicks. So, the idea is to make it somehow possible to do a lot of edits in a batch process. So you enter multiple edits in some screen, and then click once to submit all changes in a single click. It’s a very elaborate post and there’s also elaborate commenting on it, so it’s definitely worth reading and worth your vote. Once again: please check the show notes, where I’ll include a link so that you can also vote for this idea.
Jan Van Haver 24:51
And that’s all I have for this episode. If you want to reach out to me with comments, remarks, questions: no problem, please do so by sending, for example, an email to firstname.lastname@example.org. You can find me on Twitter, my handle there is localguidesguru, or you can reach out to me on Local Guides Connect, where you can find me under my regular name Jan Van Haver. And the shownotes can of course be found on the homepage of the podcast: letsguidepodcast.com.
Jan Van Haver 25:28
The next episode will be out in two weeks already and I think it will be a very exciting one, because the topic will be my newest project – regular visitors of Local Guides Connect already know what I’m talking about – yes, indeed: the Local Guides Clean The Map project. That’s for the next episode, number 19. By the way, should you be listening to this episode before January 31: we still have a Local Guides Clean The Map Logo Challenge going on. You can submit your proposal for a campaign logo in a separate post on Connect; I’ll make sure to include links to both the main post and the logo challenge post in the shownotes. Of course, I have to warn you, there have been some pretty awesome entries already for the logos. But of course, you’re still free to submit until January 31. For now, I say goodbye and I hope to find you back in the audience in two weeks’ time.
Transcribed by https://otter.ai