Blog contents
  • 1 minute presentation
  • How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation
  • Crowdsourcing and its application in marketing activities
  • "What is the best way to attract people and motivate them to participate in crowdsourcing?"
  • Because only good work counts!

    Wiki6.6

    05-03-2012
    1 minute presentation

    My team members and I read Macrowikinomics and especially the opportunities of crowdsourcing caught our attention.

    Crowdsourcing is the outsourcing of a research task to a large group of people in the form of an open call. The papers we read explain in detail the goals for which crowdsourcing can be used and many different methods for putting it into practice. But we noticed that most authors ignored the question of how to motivate people to participate in crowdsourcing. So as a team we decided to focus our presentation and paper on the following research question:

    "What is the best way to attract people and motivate them to participate in crowdsourcing?"

    We want to investigate whether people participate in crowdsourcing because of the financial compensation or because of non-monetary aspects such as skill variety, the possibility to build up human capital, …


    Sarah Inghelbrecht

    05-03-2012 at 13:00, written by Camille

    01-03-2012
    How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation
    A few of us mentioned that a big problem of crowdsourcing is selecting the relevant ideas out of hundreds of options. This is exactly the problem this paper addresses: "How reliable are annotations via crowdsourcing?" To answer it, the authors use statistical methods we saw last semester, such as the Kendall and Kolmogorov-Smirnov tests.

    Firstly, they investigate how much several sets of expert annotations differ from each other, in order to see whether repeated annotation is necessary and whether it influences performance ranking in a benchmark scenario. Secondly, they explore whether non-expert annotations are reliable enough to provide ground-truth annotations for a benchmarking campaign.

    To verify reliability, they use inter-annotator agreement, which describes the degree of consensus and homogeneity in judgements among annotators. They also distinguish between expert and non-expert annotators.

    After carrying out several tests, they conclude that repeated expert annotation of the whole dataset is not necessary, as long as the annotation rules are clearly defined. For the non-experts they reach the same conclusion: the differences in the non-expert annotations do not influence the ranking of different systems in a benchmark scenario.
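    To get a feel for the kind of statistic involved, here is a small sketch (my own toy example, not taken from the paper) that compares the ranking of a few benchmark systems under expert versus non-expert ground truth using Kendall's rank correlation; a value near +1 means the annotator differences barely affect the ranking:

```python
def kendall_tau(rank_a, rank_b):
    """Kendall rank correlation between two rankings of the same systems.

    +1 means identical orderings, -1 means completely reversed.
    """
    n = len(rank_a)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            # A pair of systems is concordant if both rankings order it
            # the same way, and discordant if they disagree.
            prod = (rank_a[i] - rank_a[j]) * (rank_b[i] - rank_b[j])
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
    return (concordant - discordant) / (n * (n - 1) / 2)

# Hypothetical ranks of four retrieval systems, scored once against
# expert ground truth and once against non-expert ground truth.
expert_ranks = [1, 2, 3, 4]
non_expert_ranks = [1, 2, 4, 3]  # only the two worst systems swap places

print(kendall_tau(expert_ranks, non_expert_ranks))  # high agreement
```

    In practice one would use a library implementation (e.g. `scipy.stats.kendalltau`, which also handles ties), but the idea is the same.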



    Camille Liebaert


    Nowak, S. & Rüger, S. 2010. How reliable are annotations via crowdsourcing? A study about inter-annotator agreement for multi-label image annotation. MIR '10: Proceedings of the International Conference on Multimedia Information Retrieval.


    Attachments:
    http://oro.open.ac.uk/25874/1/mir354s-nowak.pdf   

    01-03-2012 at 19:09, written by Camille

    29-02-2012
    Crowdsourcing and its application in marketing activities
    Crowdsourcing can be used for almost anything you can imagine. For companies, though, I think crowdsourcing is most profitable in the marketing area, and that is exactly what this paper is about. If you have ever wondered how firms are utilizing crowdsourcing for the completion of marketing-related tasks, you are at the right address.
    The article distinguishes three aspects of marketing. First of all, it discusses product development, how the crowd can participate in it and why it is so effective. Secondly, it discusses the advantages of using the crowd instead of agencies for advertising and promotion, as well as the disadvantages (for example 'click fraud'). Finally, it covers marketing research.
    The last part, the conclusion, summarizes the pros and cons of crowdsourcing. The author mentions that a crowd can sometimes produce a vast number of ideas with little irrelevance. But what caught my attention in this paper are the ethical issues that crowdsourcing can bring along. I had never looked at it from that angle, but crowdsourcing is in fact moving the tasks of highly paid employees to much lower-paid workers from outside the firm. Will this affect the position of those employees? And what about the crowdworkers: is it ethically acceptable that they deliver labour at wages that are far too low? Some have already called it slavery. I find this a very interesting matter; it shows that everything has a downside that should not be underestimated.


    Camille Liebaert



    Whitla, P. 2009. Crowdsourcing and its Application in Marketing Activities. Contemporary Management Research, 5(1), 15-28.


    Attachments:
    http://www.cmr-journal.org/article/viewFile/1145/2641   

    29-02-2012 at 17:16, written by Camille

    28-02-2012
    "What is the best way to attract people and motivate them to participate in crowdsourcing?"

    As a team we decided to focus our presentation and paper on the following research question:
    "What is the best way to attract people and motivate them to participate in crowdsourcing?"

    We want to investigate whether people participate in crowdsourcing because of the financial compensation or because of non-monetary aspects such as skill variety, the possibility to build up human capital, …

    Furthermore, we want to determine which motivation is best at attracting people and at ensuring that the results of the crowdsourcing activities are reliable.

    Sarah Inghelbrecht, Mieke De Saedeleer, Camille Liebaert, Yasmine De Wulf

    28-02-2012 at 22:01, written by Camille

    Because only good work counts!

    Have you ever doubted whether articles on Wikipedia can be completely correct when everyone is able to edit them?
    This is a real risk for all types of crowdsourcing projects. Operators of crowdsourcing platforms have to keep such malicious edits or contributions under control. Open-source projects like Wikipedia have taken this into account by setting up a system that makes errors or maliciously modified articles only temporary. In some cases it is more complicated than that. When participants have to do routine tasks, as on Amazon's Mechanical Turk, cheating is more appealing, because they are paid in proportion to the quantity of their work. It is difficult to check the quality and the accuracy.

    Therefore platforms implement validation mechanisms, but is that enough? Two existing methods are compared. These methods try to detect cheating workers based on either the Majority Decision (MD) or a Control Group (CG). Both approaches reach the same level of significance. Based on an extensive analysis of the cost model, the Majority Decision approach is best suited for low-paid work, while the Control Group approach is more useful for higher-paid tasks. Better workers save a lot of cost, even if they are paid slightly more.
    The authors are clear advocates of validation mechanisms, considering they are far more efficient, inexpensive and easy to implement than manually selecting task results.
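    To make the Majority Decision idea concrete, here is a small sketch (my own simplification, not the authors' implementation): each task is given to several workers, the majority answer is taken as the truth, and workers who deviate from the majority too often are flagged. The `threshold` cut-off and the data are hypothetical:

```python
from collections import Counter

def majority_answers(task_answers):
    """task_answers maps task_id -> {worker_id: answer}.
    The most common answer per task is assumed to be correct."""
    return {task: Counter(answers.values()).most_common(1)[0][0]
            for task, answers in task_answers.items()}

def flag_cheaters(task_answers, threshold=0.5):
    """Flag workers who disagree with the majority on more than
    `threshold` of their tasks (hypothetical cut-off)."""
    majority = majority_answers(task_answers)
    disagreed = Counter()
    answered = Counter()
    for task, answers in task_answers.items():
        for worker, answer in answers.items():
            answered[worker] += 1
            if answer != majority[task]:
                disagreed[worker] += 1
    return {w for w in answered if disagreed[w] / answered[w] > threshold}

# Three workers label three images; w3 answers carelessly.
answers = {
    "img1": {"w1": "cat", "w2": "cat", "w3": "dog"},
    "img2": {"w1": "car", "w2": "car", "w3": "bus"},
    "img3": {"w1": "red", "w2": "red", "w3": "red"},
}
print(flag_cheaters(answers))  # {'w3'}
```

    The Control Group approach would instead check each worker against tasks with known answers; the paper's cost model weighs the extra redundant answers MD needs against the gold-standard tasks CG needs.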

    From my point of view this was a very valuable article for understanding more about the efficiency and reliability of crowdsourcing. It is an indispensable contribution to a possible study of how to motivate crowdsourcing participants to keep doing their tasks honestly.

     

    Mieke De Saedeleer

    Hirth, M., Hoßfeld, T. & Tran-Gia, P. 2012. Analyzing costs and accuracy of validation mechanisms for crowdsourcing platforms. Mathematical and Computer Modelling, in press (available online 20 January 2012).

    Attachments:
    http://www.sciencedirect.com/science/article/pii/S0895717712000076   

    28-02-2012 at 19:08, written by Camille

    27-02-2012
    Understanding Science 2.0: Crowdsourcing and Open Innovation in the Scientific Method
    Bücheler, T. & Sieg, J.H. 2011. Understanding Science 2.0: Crowdsourcing and Open Innovation in the Scientific Method. Procedia Computer Science, 7, 327-329.

    Can you name two terms that have become buzzwords in business over the last few years? 'Crowdsourcing' and 'Open Innovation' describe forms of shared knowledge enabled by technologies like the internet. The internet connects people and facilitates collaboration in the innovation process. This approach is now also being observed in science, where it is called 'Citizen Science' or 'Science 2.0'. Crowdsourcing is the outsourcing of a research task to a large group of people in the form of an open call. In an 'Open Innovation' model, firms use external ideas as well as internal knowledge to improve their technology.

    This research tests the relevance of Crowdsourcing to the scientific method and focuses on two research questions:
    (1) Can basic science use the technologies of Crowdsourcing to become more efficient relative to the money spent, while preserving quality?
    (2) What is the best incentive for large groups to share ideas and data?
    The paper explains the research approach at length, because the authors are looking for partners from other scientific fields and seeking further input for their research.

    Pre-tests showed that the scientists supervising the projects were positively surprised by the usefulness of the results. The data from more extensive research are currently being analyzed, and if those results also prove useful, this may be a revolutionary change for the scientific world!

    Sarah Inghelbrecht

    Attachments:
    http://pdn.sciencedirect.com/science?_ob=MiamiImageURL&_cid=280203&_user=794998&_pii=S1877050911005746&_check=y&_origin=article&_zone=toolbar&_coverDate=31-Dec-2011&view=c&originContentFamily=serial&wchp=dGLbVlk-zSkzS&md5=19b4abe6dc823ecae83520bdc3f022d4/   

    27-02-2012 at 22:22, written by Camille

    Together creating value!

    What captivated me most in Macrowikinomics is exactly what this article focuses on: what about payment and employment, and the possibility of everyone gaining profit from crowdsourcing? In short: how do you create value in a context of mass collaboration?

    This article really gives a distinct overview, based on several examples of organisations. To start with, the authors distinguish "Cathedral" and "Bazaar" (software) development (terms introduced by Eric S. Raymond), which stand for commercially based development and open-source development respectively. Bazaar development relies on a group of volunteer developers who set up a base code or platform for other users. An interesting feature of this kind of development is that profitability is not given a lot of attention, although monetary incentives are involved in many crowdsourcing projects.

    Furthermore, the article explains that there are different kinds of collaborative value creation. On the side of organizations, value can consist of benefits other than monetary (e.g. collective knowledge on Wikipedia) relying on volunteers working for free; or value can arise from a business model where participants are paid for their work by client organizations. Besides this, some commercial organizations succeed in gaining profit from free crowdsourcing projects, by providing manuals or technical support for e.g. open source software like Linux. On the side of the participating crowd there are examples of free labor (Wikipedia), paid labor (InnoCentive) and inexpensive human labor (Mechanical Turk).

    To conclude, I want to mention that innovative collaboration is very useful for organisations, since many creative ideas originate and clients only have to pay for good-quality work; but it can be very difficult to select the best submission in a contest or to prevent malicious insertions… So there is still a challenge in coping with these difficulties!

    Mieke De Saedeleer

    Lee, S.M., Olson, D.L. & Trimi, S. 2012. Innovative Collaboration for Value Creation. Organizational Dynamics, 41, 7-12.
    Link:
    http://www.sciencedirect.com/science/article/pii/S0090261611000763

    27-02-2012 at 20:16, written by Camille

    Everybody can help to make Ghent a "smart city"

    Have you ever wondered how ICT can make it even more pleasant to live and study in Ghent?  You may even have some wonderful ideas that can make Ghent a “smart city”. Well, it’s possible to do something with your ideas and to put them into practice.

    A study published in April 2011 marked the start of a new project that wants to transform Ghent into a "smart city". A few students from Ghent University and researchers of the Alcatel-Lucent Bell Labs, in collaboration with the city of Ghent, launched the website www.mijnideevoorgent.be. This website made it possible for everybody to collaborate and to contribute ideas through the concept of crowdsourcing. When the first part of the study was finished, they wrote a paper revealing the first results.

    In the first part of the paper, the authors introduce a few key concepts. The most important one is the term "smart city". This concept refers to cities that are trying to be environmentally friendly and more liveable through smart energy, environments, mobility, health, education and living/working. Through the website, the researchers collected a lot of new ideas to make Ghent more liveable. In the paper they summarize the best ideas for smart engagement apps in the urban context of Ghent. The participants of the study introduced the idea of a generic tool or platform that provides all the information you need on any subject in Ghent, for example culture or mobility.

    This study shows us how crowdsourcing and community engagement can help to make it even more pleasant to live and study in Ghent.  I think that it might be interesting to investigate if there are more examples of crowdsourcing in Belgium.

    Yasmine De Wulf

    Mechant, P., De Marez, L., Claeys, L., Criel J. & Verdegem, P. 2011. Crowdsourcing for smart engagement apps in an urban context: an explorative study. International Association for Media and Communication Research (IAMCR). July 13-17 2011. 12p.

    Link: https://biblio.ugent.be/input/download?func=downloadFile&recordOId=1887102&fileOId=1888940

     

    27-02-2012 at 19:51, written by Camille

    More than fun and money. Worker Motivation in Crowdsourcing – A study on Mechanical Turk.

    Money, pastime, task autonomy, social contact, ... Have you ever wondered why people spend time on paid crowdsourcing markets such as Amazon Mechanical Turk? This was the research question of the study that Nicolas Kaufmann, Thimo Schulze and Daniel Veit conducted in 2011.

    The term crowdsourcing was defined by Howe in 2006 as "the outsourcing of a function or task traditionally done by a designated agent to an undefined network of labourers, carried out by a company or a similar institution using a type of 'open call'". Nowadays the term is also used to refer to phenomena such as open innovation, co-creation and knowledge aggregation.

    The study by the authors of this paper focused on the paid crowdsourcing market Amazon Mechanical Turk. They wanted to determine the motivation of the people who are active on this crowdsourcing platform. Therefore they built a special model that combines different existing motivation models, such as work motivation and education theory and Self-Determination Theory. They called this model "the Worker's Motivation in Crowdsourcing model".

    The authors posted a task on Mechanical Turk to collect their data. The task was called "Scientific survey about Mechanical Turk usage"; it took 10 to 15 minutes and respondents were paid $0.30. The results of the survey showed that many people are motivated by the payment, which is surprising because the overall wage level on the platform is only $1.38/h. Furthermore, the pastime motivation is only important for occasional workers. The power workers (the people who spend the most time on the platform) have more intrinsic motivations such as skill variety and task autonomy.

    Yasmine De Wulf

    Kaufmann, N., Schulze, T.  & Veit D. 2011. More than fun and money. Worker Motivation in Crowdsourcing – A study on Mechanical Turk. Proceedings of the seventeenth Americas Conference on Information Systems, Detroit, Michigan, August 4th-7th 2011. 11p. 

    Link: http://schader.bwl.uni-mannheim.de/fileadmin/files/publikationen/Kaufmann_Schulze_Veit_2011_-_More_than_fun_and_money_Worker_motivation_in_Crowdsourcing_-_A_Study_on_Mechanical_Turk_AMCIS_2011.pdf

     

     

    27-02-2012 at 19:49, written by Camille

    24-02-2012
    Innovation with Living Labs: a Conceptualization of Learning from User Experiences

    Mahr, D. & Schuurman, D. 2011. Innovation with Living Labs: a Conceptualization of Learning from User Experiences. European Marketing Academy, 40th Conference Proceedings, 6.

     

    The lack of correspondence between customer needs and product characteristics represents a key reason for failed innovations. This study researches customer involvement in innovation processes by the use of Living Labs. It focuses on two main research questions:

    (1) What defines a Living Lab and how does it foster knowledge creation?

    (2) What is the impact of customer characteristics on knowledge created through Living Labs?

     

    The concept of Living Labs is a process in which firms observe customers in their own real-life setting as they develop solutions to new, unprecedented problems and discover new usage possibilities. The firms learn from those user experiences, generalize the findings and modify products for new trials.

    The advantages and disadvantages as well as the characteristics which distinguish Living Labs from other research approaches are explained in the paper.

    The type of customers involved also determines the success. Lead users possess knowledge that is useful for the innovation process. They detect needs far earlier than other customers and benefit by obtaining a solution to them. The role of so-called 'defectors' (unsatisfied users) is often underestimated.

     

    The authors conclude with proposals for future research, such as research on the design and use of Living Labs, or determining how important Living Labs are in uncovering customers' latent needs. In my opinion the authors ignore the drawback of having to attract voluntary participants.

    Sarah Inghelbrecht

    Attachments:
    https://biblio.ugent.be/record/1887143

    24-02-2012 at 17:17, written by Camille

    Tags: Living Labs, Crowdsourcing

