Twenty-four questions

We drew attention to twenty-four ePortfolio questions in this unit. There were six questions for each of the themes:

Theme 1 (Introductions)

Q1. What are your thoughts about a non-linear course in which you are the driver of your own learning pathway?

Q2. How might we help you to connect with others on the course?

Q3. Do you have any resources you would like to recommend for inclusion in the course?

Q4. Is there a difference between Informatik and Informatics?

Q5. What distinguishes Sport Analytics from Sport Informatics?

Q6. What is the relationship between Informatik, Informatics, Analytics and Performance Analysis?

Theme 2 (Pattern recognition)

Q7. What is systematic about ‘systematic’ observation?

Q8. Do we need to concern ourselves about the reliability and validity of data?

Q9. Why is it important to de-identify performance data?

Q10. What did you discover in the shared dataset?

Q11. What have you learned about supervised learning approaches?

Q12. What are your thoughts about how we relate patterns of performance to moments of performance within games?

Theme 3 (Performance monitoring)

Q13. What aspects of this topic are of particular interest to you?

Q14. What criteria would you use to decide which wearable technologies to use?

Q15. Were there any aspects of the work of universities, centres, and sport you found informative?

Q16. Have you used any video tracking technology or video monitoring data?

Q17. Do you have any concerns about the validity and reliability of data gathered from wearable technologies or video tracking?

Q18. Are there any ethical issues involved in monitoring data collected in the ways discussed in this theme?

Theme 4 (Audiences and messages)

Q19. Is ‘augmented information’ a helpful description of the ways you share information?

Q20. Does ‘feedforward’ have any place in your work?

Q21. Do you have any experience of using infographics?

Q22. Are there any visualisation approaches that you recommend?

Q23. Is the concept of a personal learning environment helpful in your practice?

Q24. Can you visualise your personal learning environment?


Audiences and messages

The fourth theme in this course considered the ways in which messages are shared with a range of audiences. We:

  • Discussed augmented information in general and feedforward in particular.
  • Explored the visualisation of data.
  • Invited you to think about personal learning environments, and how you might use an ePortfolio to share your formative and summative reflections about the course and your own learning.

Our discussions about the visualisation of data included this quote from Maria Popova[1] about “the intersection of art and algorithm”:

Ultimately, data visualization is more than complex software or the prettying up of spreadsheets. It’s not innovation for the sake of innovation. It’s about the most ancient of social rituals: storytelling. It’s about telling the story locked in the data differently, more engagingly, in a way that draws us in, makes our eyes open a little wider and our jaw drop ever so slightly. And as we process it, it can sometimes change our perspective altogether.
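The course did not prescribe any particular visualisation tool, and storytelling with data can require very little machinery. As a minimal, hypothetical sketch (the goals-per-quarter figures are invented for illustration), here is a text-based bar chart in Python:

```python
def bar_chart(data, width=30):
    """Render a dict of label -> value as a horizontal ASCII bar chart."""
    peak = max(data.values())
    lines = []
    for label, value in data.items():
        bar = "#" * round(width * value / peak)
        lines.append(f"{label:<12}{bar} {value}")
    return "\n".join(lines)

# Hypothetical goals-per-quarter data, for illustration only.
goals = {"Quarter 1": 3, "Quarter 2": 5, "Quarter 3": 2, "Quarter 4": 6}
print(bar_chart(goals))
```

Even a sketch this small turns a column of numbers into a shape the eye can compare at a glance, which is the storytelling point Popova makes above.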

We noted the importance of storytelling in the sharing of sport analytics data. We included feedforward in our discussions and added a topic to explore the conceptual and practical implications of ‘learning forward’[2].

References

  1. Popova, Maria (12 August 2009). “Data Visualization: Stories for the Information Age”. http://www.businessweek.com/innovate/content/aug2009/id20090811_137179.htm. Retrieved 27 February 2016.
  2. Couros, George (23 February 2016). http://connectedprincipals.com/archives/12323. Retrieved 29 February 2016.


Monitoring performance

The third theme in this course was Performance Monitoring. In this theme we:

  • Presented some background information about performance monitoring.
  • Discussed the development of wearable technologies to monitor performance.
  • Explored some examples of motion and video tracking technologies.

We took the opportunity to look at the quantification of personal performance (the quantified self). This allowed us to consider some of the ethical issues involved in monitoring and surveillance, including a discussion of the anonymity and confidentiality of data.
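One common practical response to these concerns is to de-identify monitoring data before it is shared. As a minimal sketch (the athlete names, distances, and salt value are all hypothetical), identifying strings can be replaced with salted hashes so that records remain linkable across sessions without naming anyone:

```python
import hashlib

def pseudonymise(athlete_id: str, salt: str) -> str:
    """Replace an identifying string with a salted, truncated hash.
    The salt must be kept secret and stored separately from the data."""
    digest = hashlib.sha256((salt + athlete_id).encode("utf-8")).hexdigest()
    return "athlete-" + digest[:8]

# Hypothetical monitoring records (name, distance covered in km).
records = [("J. Smith", 7.2), ("A. Jones", 6.8)]
salt = "replace-with-a-secret-value"
deidentified = [(pseudonymise(name, salt), distance) for name, distance in records]
print(deidentified)
```

Because the same name and salt always produce the same pseudonym, longitudinal analysis of an athlete’s data remains possible while direct identifiers are removed from the shared data set.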


Pattern recognition

In the Pattern Recognition theme (T2), we considered:

  • Systematic observation of performance.
  • Supervised learning approaches to data analysis.
  • Connections between performance trends and athlete actions.
  • Use of open source tools such as R to analyse performance and visualise data.

We shared GPS data from an Australian Rules Football team’s performance in a competitive game in order to explore the potential of such data to provide interesting and actionable insights. We presented Mladen Jovanović’s[1] analysis of the data set as a case study.
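Jovanović’s report is the authoritative case study for the shared data set. As an illustration only, here is a minimal sketch of one supervised learning approach (a 1-nearest-neighbour classifier) applied to invented GPS-style features; the labelled samples and query point are hypothetical:

```python
import math

def nearest_neighbour(train, query):
    """1-nearest-neighbour classifier: return the label of the training
    point closest (Euclidean distance) to the query point."""
    best_label, best_dist = None, math.inf
    for features, label in train:
        dist = math.dist(features, query)
        if dist < best_dist:
            best_label, best_dist = label, dist
    return best_label

# Hypothetical (speed m/s, acceleration m/s^2) samples, labelled by hand.
train = [
    ((2.1, 0.3), "jog"), ((2.6, 0.5), "jog"),
    ((7.8, 2.1), "sprint"), ((8.4, 1.8), "sprint"),
]
print(nearest_neighbour(train, (7.5, 1.9)))
```

The supervised element is the hand-labelled training set: the algorithm generalises from examples an analyst has already classified, which is why label quality matters as much as the algorithm itself.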

This theme included a topic on knowledge discovery in databases (KDD) and used two examples from the sport literature to explore the practice of KDD.

Reference

  1. Jovanović, Mladen (13 March 2015). “AFL Data Analysis Report”. http://complementarytraining.net/wp-content/uploads/2015/03/AFL_Analysis.html. Retrieved 26 March 2016.


Connecting learners with open resources

This is an OERu course that has been designed to contribute to the community service and outreach mission of the OERu.

In the Introductions theme (T1), we:

  • Explored our approach to open sharing.
  • Introduced people, perspectives, products and processes.
  • Drew attention to the Informatik tradition and its links with sport informatics.
  • Discussed the emergence of sport analytics.

The Audiences and Messages theme (T4) contained a discussion of personal learning.

One of our topics within the Introductions theme was Communities of Practice. We mentioned that Etienne Wenger[1][2] has discussed the benefits of communities of practice for learning communities. We shared this quotation:

Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

We noted too that his definition of communities of practice has three important characteristics:

  • A domain that has an identity defined by a shared interest.
  • A community in which “members engage in joint activities and discussions, help each other, and share information. They build relationships that enable them to learn from each other.”
  • A shared practice that is developed through “a shared repertoire of resources: experiences, stories, tools, ways of addressing recurring problems”.[3]

One of the issues that arises from your engagement with this course is how you might become part of a vibrant community interested in, and possibly passionate about, sport informatics and analytics. Sharing ePortfolio content and alerting others through the use of a hashtag such as #UCSIA16 (see also #UCSIA15) is a good way to do this.

References

  1. Wenger, Etienne (1998). Communities of practice: learning, meaning, and identity. Cambridge: Cambridge University Press.
  2. Wenger, Etienne (June 2006). “Communities of practice a brief introduction”. http://www.linqed.net/media/15868/COPCommunities_of_practiceDefinedEWenger.pdf.
  3. Wenger, Etienne (June 2006). “Communities of practice a brief introduction”. http://www.linqed.net/media/15868/COPCommunities_of_practiceDefinedEWenger.pdf.


Introduction

This page brings together the course content as a capstone page for the Sport Informatics and Analytics course. It is an opportunity to reflect on the epistemic culture[1] of the subject field.

The aim of this OERu course has been to explore the intersection of informatics and analytics in sport contexts. We hope you have had an opportunity to visit the four themes that form the scaffold for the course.

By the end of your online activity, we hope you have:

  • Thought about your personal learning journey in a course that is founded on connectivist principles.
  • Engaged in some pattern recognition activities.
  • Explored diverse approaches to performance monitoring.
  • Considered how you might use augmented information with a range of audiences.

In the Introduction to this course, we anticipated that at the completion of this course, you would be able to:

  • Demonstrate disciplined and critical insights into the observation, recording and analysis of performance in sport training and competition environments.
  • Apply knowledge of better practice in sport informatics and analytics to your own sport contexts.
  • Reflect critically on the use of sport informatics and analytics in order to anticipate and develop opportunities to transform your own and others’ performances.

For more information about these learning outcomes see this page.

If you are contemplating the submission of an ePortfolio of your work for credit, we hope you have found sufficient stimulus to reflect on your learning experiences.

Reference

  1. Knorr Cetina, Karin. “Culture in global knowledge societies: knowledge cultures and epistemic cultures”, 1999, p. 363. Retrieved 12 January 2016.


Sport examples

We present two examples here for your consideration.

Chris Anderson and David Sally discuss the potential of an analytics approach to association football in their study of The Numbers Game[1].

In the introduction to their book, they write:

The clue to analytics is in the name. To make (those) numbers mean something, to learn something from them, they must be analysed. The key, for those at the vanguard of what some have called a data ‘revolution’ and what we think of as football’s reformation, is to work out what they need to be counting, and to discover why, exactly, what they are counting counts.[2]

Their book explores the analytics process and raises important empirical and methodological issues for this unit.

Activity

A Numbers Game?

Read the introductory chapter in The Numbers Game, ‘Football for Sceptics – The Counter(s) Reformation’.
Does their suggestion resonate with your experience of sport?

A storm is gathering in football. It is one that will wash away old certainties and change the game we know and love. It will be a game we view more analytically, more scientifically, where we do not accept what we have always been taught, but where we always ask why. The game will look the same, but the way we think about it will be almost unrecognizable[3].

The second example presented here is the paper written in 1997 by Inderpal Bhandari and his colleagues at the IBM TJ Watson Research Center. The paper is titled Advanced Scout: Data Mining and Knowledge Discovery in NBA Data. In the paper, they report their analysis of data gathered by a software program, Advanced Scout, that “seeks out and discovers interesting patterns in game data”[4]. We have chosen this paper to connect with the spirit of the literature of the time. The editor of the journal in which the paper appeared was Gregory Piatetsky-Shapiro.
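The kind of pattern Advanced Scout surfaced can be sketched in miniature: compare a team statistic when some condition holds against when it does not. The sketch below is hypothetical and loosely in the spirit of the lineup-based findings the paper describes; all events, field names, and numbers are invented:

```python
# Invented play-by-play events: did a particular lineup condition hold,
# and was the shot made?
events = [
    {"lineup_condition": True,  "shot_made": True},
    {"lineup_condition": True,  "shot_made": True},
    {"lineup_condition": True,  "shot_made": False},
    {"lineup_condition": False, "shot_made": False},
    {"lineup_condition": False, "shot_made": True},
    {"lineup_condition": False, "shot_made": False},
]

def shooting_pct(rows, condition):
    """Shooting percentage over the events where the condition matches."""
    subset = [r for r in rows if r["lineup_condition"] == condition]
    return sum(r["shot_made"] for r in subset) / len(subset)

# A gap between the two rates is a candidate "interesting pattern"
# that a coach would then verify against video of the relevant plays.
print(shooting_pct(events, True), shooting_pct(events, False))
```

As the paper emphasises, such a pattern is a prompt for a coach to investigate, not a conclusion in itself.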

Activity

Another coach on the team?

Read the paper here.

  • Can you see any parallels with the generic discussion in Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth’s paper[5]?
  • Mindful of your consideration of the Audiences and Messages theme of this unit, how might you go about sharing insights from the data you have gathered to offer the team another coach as suggested by Bob Salmi in the paper?

References

  1. Anderson, Chris; Sally, David (2013). The numbers game: why everything you know about football is wrong. London: Penguin.
  2. Anderson, Chris; Sally, David (2013). The numbers game: why everything you know about football is wrong. London: Penguin. n.p.
  3. Anderson, Chris; Sally, David (2013). The numbers game: why everything you know about football is wrong. London: Penguin. n.p.
  4. Bhandari, Inderpal et al. (1997). “Advanced Scout: Data Mining and Knowledge Discovery in NBA Data”. Data Mining and Knowledge Discovery 1 (1): 121. http://www.cse.unr.edu/~sushil/class/ml/papers/local/nba.pdf.
  5. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 37-54. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.


Introduction

This topic explores how we can extract useful information and actionable insights from sport data.

A variety of labels has been used to characterise processes that extract useful information from data. These include “data mining, knowledge extraction, information discovery, information harvesting, data archaeology, and data pattern processing”[1].

Gregory Piatetsky-Shapiro[2] introduced the term “knowledge discovery” in his report of a 1989 workshop that brought together practitioners from “expert systems, machine learning, intelligent databases, knowledge acquisition, case-based reasoning and statistics”[3]. The report concluded that “knowledge discovery in databases is an idea whose time has come”[4].

William Frawley, Gregory Piatetsky-Shapiro, and Christopher Matheus[5] provided one of the earliest overviews of knowledge discovery in databases in 1992. They defined knowledge discovery in databases (KDD) as:

Knowledge discovery is the nontrivial extraction of implicit, previously unknown, and potentially useful information from data. Given a set of facts (data) F, a language L, and some measure of certainty C, we define a pattern as a statement S in L that describes relationships among a subset Fs of F with a certainty c, such that S is simpler (in some sense) than the enumeration of all facts in Fs. A pattern that is interesting (according to a user-imposed interest measure) and certain enough (again according to the user’s criteria) is called knowledge. The output of a program that monitors the set of facts in a database and produces patterns in this sense is discovered knowledge.[6]

They added “Patterns are interesting when they are novel, useful, and non-trivial to compute”[7].

In 1996, Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth provided “an overview of this emerging field, clarifying how data mining and knowledge discovery in databases are related both to each other and to related fields, such as machine learning, statistics, and databases”[8]. Their paper distinguishes KDD from data mining. They note:

In our view, KDD refers to the overall process of discovering useful knowledge from data, and data mining refers to a particular step in this process. Data mining is the application of specific algorithms for extracting patterns from data[9].

They argue that KDD is a process and data mining is a step within that process. The derivation of useful knowledge from data requires:

  • data preparation
  • data selection
  • data cleaning
  • incorporation of appropriate prior knowledge
  • proper interpretation of the results of data mining[10]
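These steps can be sketched in miniature. The shot records, field names, and thresholds below are invented for illustration and are not drawn from the shared course data set:

```python
# Invented shot records; one has a missing value to motivate cleaning.
raw = [
    {"player": "A", "distance_m": 18, "goal": True},
    {"player": "B", "distance_m": None, "goal": False},  # incomplete record
    {"player": "A", "distance_m": 30, "goal": False},
    {"player": "C", "distance_m": 12, "goal": True},
]

# Selection and cleaning: keep only complete records.
clean = [row for row in raw if row["distance_m"] is not None]

# The data mining step proper: a trivial pattern search over a
# distance threshold (prior knowledge suggests distance matters).
def goal_rate(rows, max_distance):
    shots = [r for r in rows if r["distance_m"] <= max_distance]
    return sum(r["goal"] for r in shots) / len(shots) if shots else 0.0

# Interpretation: compare close-range and all-range conversion rates.
print(goal_rate(clean, 20), goal_rate(clean, 100))
```

Even in this toy form, the mining call is one line inside a longer process of preparation, cleaning, and interpretation, which is exactly the distinction Fayyad and colleagues draw.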

Usama Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth provide the conceptual and practical foundation for the KDD process in sport contexts. They propose:

KDD focuses on the overall process of knowledge discovery from data, including how the data are stored and accessed, how algorithms can be scaled to massive data sets and still run efficiently, how results can be interpreted and visualized, and how the overall man-machine interaction can usefully be modeled and supported[11].

Twenty years after the publication of their paper, there is still a tendency to treat data mining and KDD as interchangeable terms. During this unit we have used the term analytics as a shorthand for KDD.

Our discussion of analytics used this definition:

The discovery, communication, and implementation of actionable insights derived from structured information in order to improve the quality of decisions and performance in an organization.

As we develop our KDD skills, this activity will include unstructured data too. Whatever is included, it will be part of a process that the literature of the 1990s foresaw.

References

  1. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 39. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.
  2. Piatetsky-Shapiro, Gregory (1990). “Knowledge Discovery in Real Databases: A Report on the IJCAI-89 Workshop”. AI Magazine 11 (5): 68-70. https://www.aaai.org/ojs/index.php/aimagazine/article/download/873/791.
  3. Piatetsky-Shapiro, Gregory (1990). “Knowledge Discovery in Real Databases: A Report on the IJCAI-89 Workshop”. AI Magazine 11 (5): 68. https://www.aaai.org/ojs/index.php/aimagazine/article/download/873/791.
  4. Piatetsky-Shapiro, Gregory (1990). “Knowledge Discovery in Real Databases: A Report on the IJCAI-89 Workshop”. AI Magazine 11 (5): 70. https://www.aaai.org/ojs/index.php/aimagazine/article/download/873/791.
  5. Frawley, William; Piatetsky-Shapiro, Gregory; Matheus, Christopher (1992). “Knowledge Discovery in Databases: An Overview”. AI Magazine 13 (3): 57-70. http://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1011/929.
  6. Frawley, William; Piatetsky-Shapiro, Gregory; Matheus, Christopher (1992). “Knowledge Discovery in Databases: An Overview”. AI Magazine 13 (3): 58. http://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1011/929.
  7. Frawley, William; Piatetsky-Shapiro, Gregory; Matheus, Christopher (1992). “Knowledge Discovery in Databases: An Overview”. AI Magazine 13 (3): 58. http://www.aaai.org/ojs/index.php/aimagazine/article/viewFile/1011/929.
  8. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 37-54. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.
  9. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 39. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.
  10. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 39. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.
  11. Fayyad, Usama; Piatetsky-Shapiro, Gregory; Smyth, Padhraic (1996). “From Data Mining to Knowledge Discovery in Databases”. AI Magazine 17 (3): 39ff. http://www.aaai.org/ojs/index.php/aimagazine/article/download/1230/1131/.


Thriving communities

Bill Johnston[1] has outlined the characteristics of thriving online communities. These include:

  • Shared value (answers, content, connection, expertise and access)
  • Shared identity (to form and shape the community)
  • Vibrant participation (presence, contribution and facilitation)
  • Community leadership (clarity about roles and responsibilities)
  • Quality content (creation, aggregation and curation)
  • Expertise (community learns from leaders within the community)
  • Culture of trust (community is connected and able to share openly)
  • Elegant experience (easy to contribute and participate)
  • Growth and responsiveness (continuing inclusion of members and engagement with the governance of the community)

Activity

Thriving communities

If you do look at Bill Johnston’s article, do you think his characteristics can be applied to all communities of practice? Are they limited to online presence? How might stewardship occur in non-online settings?

Reference

  1. Johnston, Bill (20 October 2013). “Attributes of Thriving Online Communities”. http://blog.structure3c.com/2013/10/20/attributes-of-thriving-online-communities/.


Communities of practice in sport

There is a growing interest in the roles communities of practice can play in sport. For example, a University of Ottawa research group has developed a strong interest in these communities.
Diane Culver and Pierre Trudel[1] provided an overview of communities of practice in sport. Their paper stimulated debate about the concept of a community of practice that is summarised here.
Diane Culver and Pierre Trudel joined with Penny Werthner to provide a longitudinal study of a coaches’ community of practice in a youth baseball league[2]. Rachael Bertram[3] has provided more detail about communities of practice in sport settings in her 2016 PhD thesis.

Elsewhere, John Stoszkowski and Dave Collins[4] have reported on the use of online blogs to structure and support informal coach learning. This paper will be of particular interest to anyone who is considering the submission of an ePortfolio for the assessment of this unit. A second paper, co-authored with Cliff Olson, offers “insight into student coaches’ perceptions of their use and experiences of structured group blogging for reflection and learning”[5].

Bettina Callary[6] has reported on the creation of a community of practice within a figure skating club. She proposes that her case study:

paves the way to understanding that a CoP can be developed and sustained by coaches when they are in an environment where collaborative coaching and learning is the norm and where coaches entering into the system expect it[7].

Portfolio activity

An ePortfolio suggestion

If you have an opportunity to look at some of the literature on communities of practice, can you write a reflection about the potential of such communities to support personal learning?

References

  1. Culver, Diane; Trudel, Pierre (2008). “Clarifying the concept of communities of practice in sport.”. International Journal of Sports Science & Coaching 3 (1): 1-10. https://www.researchgate.net/profile/Diane_Culver/publication/238440467_Clarifying_the_Concept_of_Communities_of_Practice_in_Sport/links/5564c84408aec4b0f48591fe.pdf.
  2. Culver, Diane; Trudel, Pierre; Werthner, Penny (2009). “A Sport Leader’s Attempt to Foster a Coaches’ Community of Practice”. International Journal of Sports Science & Coaching 4 (3): 365-383. http://spo.sagepub.com/content/4/3/365.short.
  3. Bertram, Rachael (2016). Designing, implementing, assessing, and sustaining sport coach communities of practice (Ph.D). University of Ottawa. https://www.ruor.uottawa.ca/bitstream/10393/34282/3/Bertram_Rachael_2016_thesis.pdf.
  4. Stoszkowski, John; Collins, Dave (2015). “Using shared online blogs to structure and support informal coach learning Part 1: A tool to scaffold reflection and communities of practice?”. Sport, Education and Society: 1-24. http://clok.uclan.ac.uk/11666/.
  5. Stoszkowski, John; Collins, Dave; Olson, Cliff (2015). “Using shared online blogs to structure and support informal coach learning. Part 2: The participants’ view and implications for coach education”. Sport, Education and Society: 1-19. http://dx.doi.org/10.1080/13573322.2015.1030382.
  6. Callary, Bettina (2013). “Coaches Create and Sustain a Community of Practice within a club”. Revue phénEPS/PHEnex Journal 4 (3): 1-13. http://ojs.acadiau.ca/index.php/phenex/article/download/1497/1260.
  7. Callary, Bettina (2013). “Coaches Create and Sustain a Community of Practice within a club”. Revue phénEPS/PHEnex Journal 4 (3): 1. http://ojs.acadiau.ca/index.php/phenex/article/download/1497/1260.


Introduction

A theme that pervades this course is that we live in a world of connections. Although we might not always be aware of it, we share interests with others and together we have the opportunity to form a community.

Zygmunt Bauman[1] has explored the characteristics of a community. He suggests:

Words have meanings: some words, however, also have a ‘feel’. The word ‘community’ is one of them. It feels good: whatever the word ‘community’ may mean, it is good ‘to have a community’, ‘to be in a community’[2].

Etienne Wenger[3][4] has discussed the benefits of communities of practice for learning communities. He proposes:

Communities of practice are groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.

His definition of communities of practice has three important characteristics:

  • A domain that has an identity defined by a shared interest.
  • A community in which “members engage in joint activities and discussions, help each other, and share information. They build relationships that enable them to learn from each other.”
  • A shared practice that is developed through “a shared repertoire of resources: experiences, stories, tools, ways of addressing recurring problems”.[5]

Etienne Wenger, Nancy White and John Smith[6] have discussed how such communities might flourish in digital habitats. They identify the role technology stewards can play in such flourishing. Technology stewards are:

people with enough experience of the workings of a community to understand its technology needs, and enough experience with or interests in technology to take leadership in addressing those needs.[7]

References

  1. Bauman, Zygmunt (2001). Community: Seeking Safety in an Insecure World. Cambridge: Polity Press.
  2. Bauman, Zygmunt (2001). Community: Seeking Safety in an Insecure World. Cambridge: Polity Press. p. 1.
  3. Wenger, Etienne (1998). Communities of practice: learning, meaning, and identity. Cambridge: Cambridge University Press.
  4. Wenger, Etienne (June 2006). “Communities of practice: a brief introduction”. http://www.linqed.net/media/15868/COPCommunities_of_practiceDefinedEWenger.pdf. Retrieved 29 February 2016.
  5. Wenger, Etienne (June 2006). “Communities of practice: a brief introduction”. http://www.linqed.net/media/15868/COPCommunities_of_practiceDefinedEWenger.pdf. Retrieved 29 February 2016.
  6. Wenger, Etienne; White, Nancy; Smith, John (2009). Digital Habitats: stewarding technology for communities. Portland: CPSquare.
  7. Wenger, Etienne; White, Nancy; Smith, John (2009). Digital Habitats: stewarding technology for communities. Portland: CPSquare. p. 25.



Feedforward

In Peter Dowrick’s work, feedforward uses video to model behaviour. His Ph.D research led him to define self-modeling as:

the behavioral change that results from the repeated observation of oneself on videotapes that show only desired target behaviors.[1]

He researched the potential of video self-modeling for four decades. Keith Lyons[2][3] has provided a review of Peter Dowrick’s research.
Feedforward need not be restricted to video self-modeling. Peter Dowrick’s review of self-modeling[4] noted:

The most rapid learning by humans can be achieved by mental simulations of future events, based on reconfigured preexisting component skills. These reconsiderations of learning from the future, emphasizing learning from oneself, have coincided with developments in neurocognitive theories of mirror neurons and mental time travel.

This ‘learning from oneself’ does raise important pedagogical issues that can be overlooked if a focus is placed solely on feedback. It continues a discussion started by Richard Schmidt[5] about augmented information.

The use of feedforward in physical education and sport settings has the potential to transform learning environments. Cojanu Florin[6] and Harrison Kingston[7] provide examples of how feedforward might shape explorations of learning.

References

  1. Dowrick, Peter; Dove, Cynthia (1980). “The use of self-modeling to improve the swimming performance of spina bifida children”. Journal of Applied Behavior Analysis 13 (1): 51-56. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1308105/pdf/jaba00047-0053.pdf. Retrieved 29 February 2016.
  2. http://keithlyons.wordpress.com/2009/06/28/feedforward/
  3. http://keithlyons.me/blog/2014/10/19/ucsia15-exploring-feedforward-and-mental-time-travel/
  4. Dowrick, Peter (2012). “Self modeling: Expanding the theories of learning”. Psychology in the Schools 49 (1): 30-41.
  5. Schmidt, Richard (1991). “Frequent Augmented Feedback Can Degrade Learning: Evidence and Interpretations”. Tutorials in Motor Neuroscience 62: 59-75.
  6. Florin, Cojanu (2010). “Designing physical education lessons in primary school by content type feed-forward”. Journal of Physical Education & Sport 27 (2): 136-140.
  7. Kingston, Harrison (2008). “Examine the effects of feed forward on the proficiency of penalty kicks in football, using a performance analysis framework and a manual notation system, re-testing performance after a tactical intervention strategy”. Cardiff Metropolitan University. https://repository.cardiffmet.ac.uk/dspace/handle/10369/741.


Introduction

This topic has been included in this course to explore how feedforward might offer an alternative way to share messages with audiences.
Peter Dowrick[1] has led the discussion of feedforward (‘learning forward’[2]) in sport settings.

References

  1. Dowrick, Peter (1976). “Self modelling: a videotape training technique for disturbed and disabled children.” (Ph.D). University of Auckland.
  2. Couros, George (23 February 2016). http://connectedprincipals.com/archives/12323. Retrieved 29 February 2016.



Exploring visualisations

activity

Examples from sport

As you explore the visualisation options you have, you might like to follow up on some of these links.

How might these kinds of approaches inform your work?


Visualisation

The mind map for this topic shares a variety of links to data visualisation and storytelling.

The tools we have to share our data with audiences are available in open formats and commercial products. (An example of the use of an open format was shared in the case study of R in this unit.)

Trina Chiasson, Dyanna Gregory and their colleagues[1] have provided a comprehensive guide to preparing and visualising information. They have shared the source code for their work on Github. They note:

Data come in all different shapes, sizes, and flavors. There’s no one-size-fits-all solution to collecting, understanding, and visualizing information. Some people spend years studying the topic through statistics, mathematics, design, and computer science. And many people want a bit of extra help getting started.


The awareness that there is no “one-size-fits-all” solution has led to some fascinating discussions about the aesthetics of visualisation. Stephen Few[2], David McCandless[3], Alberto Cairo[4][5], Gregor Aisch[6] and Giorgia Lupi[7], amongst others, have explored the ways in which we visualise data stories.

There is more information about visualisation discussions here. A regularly updated list of readings in visualisation can be found here.

References

  1. Chiasson, Trina; Gregory, Dyanna (2014). Data + Design. http://orm-atlas2-prod.s3.amazonaws.com/pdf/c8343f6ef7ec9b5380590bab54d6715f.pdf.
  2. Few, Stephen (2013). Information Dashboard Design (2nd ed.). Burlingame: Analytics Press. http://www.amazon.co.uk/dp/1938377001.
  3. McCandless, David. http://www.informationisbeautiful.net/about/. Retrieved 27 February 2016.
  4. Cairo, Alberto (2012). The Functional Art: An introduction to information graphics and visualization. San Francisco: Peachpit. http://www.thefunctionalart.com/p/about-book.html.
  5. Cairo, Alberto (2016). The Truthful Art. San Francisco: Peachpit. http://www.thefunctionalart.com/p/the-truthful-art-book.html.
  6. Aisch, Gregor. http://vis4.net/blog/. Retrieved 27 February 2016.
  7. Lupi, Giorgia. http://giorgialupi.com/work. Retrieved 27 February 2016.


Introduction

At some point in the sport analytics process we share our findings with an audience.

Wolfgang Iser[1] suggested that when we produce a story to share we should think carefully about how we construct the story and imagine the recipients of the story. He notes that any story has “a network of response-inviting structures” that enable the reader or the listener “to grasp the text”.

The availability of video platforms has extended the reach of such stories.

More recently, Maria Popova[2] has looked at the impact digital platforms have had on the way data are shared. She notes that at “the intersection of art and algorithm”:

Ultimately, data visualization is more than complex software or the prettying up of spreadsheets. It’s not innovation for the sake of innovation. It’s about the most ancient of social rituals: storytelling. It’s about telling the story locked in the data differently, more engagingly, in a way that draws us in, makes our eyes open a little wider and our jaw drop ever so slightly. And as we process it, it can sometimes change our perspective altogether.

References

  1. Iser, Wolfgang (1976). The Implied Reader: Patterns of Communication in Prose Fiction from Bunyan to Beckett. Baltimore: Johns Hopkins Press.
  2. Popova, Maria (12 August 2009). “Data Visualization: Stories for the Information Age”. http://www.businessweek.com/innovate/content/aug2009/id20090811_137179.htm. Retrieved 27 February 2016.



A case study

activity

Using R to analyse Australian Rules Football data

In 2015, Mladen Jovanović[1] analysed the data shared in the Pattern Recognition introduction in this unit. He cleaned the data, produced a new .csv file and shared his analysis of the data. He provides a step-by-step guide that uses a variety of visualisations to bring the raw data file to life.
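The clean-then-share workflow described above can be sketched in Python with pandas. This is a minimal illustration only: the column names and values below are invented, not those of the AFL file Jovanović analysed.

```python
import io

import pandas as pd

# Hypothetical raw GPS export. The column names and values are invented
# for illustration; the actual file used in the case study differs.
raw = io.StringIO(
    "player,quarter,distance_m,top_speed_kmh\n"
    "A,Q1,3120,28.4\n"
    "A,Q2,2980,27.1\n"
    "B,Q1,3555,30.2\n"
    "B,Q2,3410,\n"  # incomplete row to be cleaned out
)

df = pd.read_csv(raw)

# Typical cleaning steps before analysis: drop incomplete rows, enforce
# numeric types, then write a tidy .csv for reuse in later visualisations.
clean = df.dropna().astype({"distance_m": "int64", "top_speed_kmh": "float64"})
clean.to_csv("afl_gps_clean.csv", index=False)

print(clean.groupby("quarter")["distance_m"].sum())
```

Dropping incomplete rows is only one cleaning choice; depending on the analysis, imputing missing values might be preferred.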

Reference

  1. Jovanović, Mladen (13 March 2015). “AFL Data Analysis Report”. http://complementarytraining.net/wp-content/uploads/2015/03/AFL_Analysis.html. Retrieved 26 March 2016.


Introduction

This topic develops issues raised in Theme 2 of this course Pattern Recognition.

R is a programming language and a software environment for statistical computing and graphics that is supported by the R Foundation for Statistical Computing.[1]

Kurt Hornik and Friedrich Leisch[2] introduce R in the first edition of the R Newsletter. The R Core Team provide a brief background report about R in the same newsletter.[3]

You can find a detailed description of R on this Wikipedia page.

References

  1. Hornik, Kurt (November 26, 2015). “R FAQ”. 2.1 What is R?. https://cran.r-project.org/doc/FAQ/R-FAQ.html#What-is-R_003f. Retrieved 9 February 2016.
  2. Hornik, Kurt and Leisch, Friedrich (1 January, 2001). “Editorial”. https://www.r-project.org/doc/Rnews/Rnews_2001-1.pdf. Retrieved 9 February 2016.
  3. The R Core Team (1 January, 2001). “What is R?”. https://www.r-project.org/doc/Rnews/Rnews_2001-1.pdf. Retrieved 9 February 2016.



The quantified self

The Wikipedia page on the Quantified Self provides a detailed overview of the emergence and development of the Quantified Self movement.

It includes references to Gary Wolf. His overview [1] of the emergence of technologies to make self-tracking more accessible includes this observation:

In the past, the methods of quantitative assessment were laborious and arcane. You had to take measurements manually and record them in a log; you had to enter data into spreadsheets and perform operations using unfriendly software; you had to build graphs to tease understanding out of the numbers. Now much of the data-gathering can be automated, and the record-keeping and analysis can be delegated to a host of simple Web apps. This makes it possible to know oneself in a new way.

Kevin Kelly[2] suggested in his preliminary discussion of the Quantified Self:

Unless something can be measured, it cannot be improved. So we are on a quest to collect as many personal tools that will assist us in quantifiable measurement of ourselves. We welcome tools that help us see and understand bodies and minds so that we can figure out what humans are here for.

Some contributors to the discussions about the Quantified Self have characterised self-tracking as Personal Informatics.[3][4][5]

Deborah Lupton[6][7] has explored the interface of critical social research and human-computer interaction (HCI) in personal informatics.

A body of literature has now been established of research that has sought to investigate the social, cultural and political dimensions of self-tracking, nearly all of which has come out in the last few years. This literature complements an established literature in human-computer interaction research (HCI), first into lifelogging and then into self-tracking (or personal informatics/analytics, as HCI researchers often call it).

Ben Williamson[8] has used the term ‘algorithmic skin’ to discuss “the ways that health-tracking produces a body encased in an ‘algorithmic skin’, connected to a wider ‘networked cognitive system’”.

Deborah Lupton and Ben Williamson are examples of a reflexive approach to “data-led and algorithmically mediated understandings of the body”.[9]

activity

Technocratic engineers?

This course looks explicitly at ethical issues related to monitoring and surveillance. Andrew Manley and Shaun Williams[10] observe:

Coaches often voice a humanistic approach to their practice; however, with an increased reliance on surveillance and data capture, it may be that we are visibly witnessing the caricature of the elite coach morphing into the modern-day technocrat engineer. And while it can appear to drive performance, at such an unrelenting pace it may not be sustainable in the long run.

Do you share their concerns?

References

  1. Wolf, Gary. “QS & The Macroscope”. http://antephase.com/themacroscope. Retrieved 20 January 2016.
  2. Kelly, Kevin. “What is the Quantified Self?”. http://www.webcitation.org/66TEY49wv. Retrieved 20 January 2016.
  3. Wilson, H. James (September 2012). “You, By the Numbers”. https://hbr.org/2012/09/you-by-the-numbers. Retrieved 20 January 2016.
  4. “Adventures in Self-Surveillance, aka The Quantified Self, aka Extreme Navel-Gazing”. Forbes. April 7, 2011. http://www.forbes.com/sites/kashmirhill/2011/04/07/adventures-in-self-surveillance-aka-the-quantified-self-aka-extreme-navel-gazing/. Retrieved 20 January 2016.
  5. “Counting every moment”. The Economist. Mar 3, 2012. http://www.economist.com/node/21548493. Retrieved 20 January 2016.
  6. Lupton, Deborah (12 January 2016). “Critical social research on self-tracking: a reading list”. https://simplysociology.wordpress.com/2016/01/12/critical-social-research-on-self-tracking-a-reading-list/. Retrieved 26 February 2016.
  7. Lupton, Deborah (15 February 2016). “Interesting HCI research on self-tracking: a reading list”. https://simplysociology.wordpress.com/2016/02/15/interesting-hci-research-on-self-tracking-a-reading-list/. Retrieved 26 February 2016.
  8. Williamson, Ben (2014). “Algorithmic skin: health-tracking technologies, personal analytics and the biopedagogies of digitized health and physical education”. Sport, Education and Society 20(1): 133-151.
  9. Williamson, Ben (2014). “Algorithmic skin: health-tracking technologies, personal analytics and the biopedagogies of digitized health and physical education”. Sport, Education and Society 20(1): 133.
  10. Williams, Shaun (4 December 2014). “‘Big Brother’ surveillance in elite sport is pushing a culture with a machine mentality”. https://theconversation.com/big-brother-surveillance-in-elite-sport-is-pushing-a-culture-with-a-machine-mentality-34214. Retrieved 26 February 2016.


Meta review

activity

Using integrated technology in team sports

Carla Dellaserra, Yong Gao and Lynda Ransdell[1] have provided a narrative qualitative review of “IT’s emerging impact in sport settings”. Their paper reviews thirty-nine publications and has eighty-three references. As part of your investigation of the quantified self, can you read this paper and reflect on the benefits of finding an authoritative meta review? Are there any issues in learning about research activity without accessing the primary source?

Reference

  1. Dellaserra, Carla; Gao, Yong; Ransdell, Lynda (3 January 2014). https://www.researchgate.net/publication/258829053. Retrieved 26 February 2016.


Quantifying performance

Technological innovation has made it increasingly possible to monitor performance (other-tracking) in unobtrusive ways. The growth in personal fitness and well-being devices has made it possible to track one’s own performance (self-tracking). Shona Halson (2014) provides an overview of a range of approaches that combine other- and self-tracking measures of training load. This overview contributes to the discussion of the use of objective and subjective measures of well-being in quantifying performance.[1]
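As a small illustration of pairing the two kinds of measure, the sketch below combines an objective external load (GPS distance) with a subjective internal load calculated as session-RPE (rating of perceived exertion multiplied by session duration, after Foster). The session values are invented for illustration.

```python
# Sketch of combining an objective (other-tracked) measure, GPS distance,
# with a subjective (self-reported) one, session-RPE (RPE x minutes).
# The session values below are invented for illustration.
sessions = [
    {"distance_m": 5200, "rpe": 6, "minutes": 60},
    {"distance_m": 7400, "rpe": 8, "minutes": 75},
    {"distance_m": 3100, "rpe": 4, "minutes": 45},
]

def session_rpe_load(rpe: int, minutes: int) -> int:
    """Subjective internal training load in arbitrary units (AU)."""
    return rpe * minutes

weekly_internal = sum(session_rpe_load(s["rpe"], s["minutes"]) for s in sessions)
weekly_external = sum(s["distance_m"] for s in sessions)

print(f"internal load: {weekly_internal} AU; external load: {weekly_external} m")
```

Tracking internal and external load side by side is one way to notice weeks in which perceived effort rises faster than the work actually done.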

Reference

  1. Saw, Anne; Main, Luana; Gastin, Paul (2015). http://bjsm.bmj.com/content/early/2015/09/30/bjsports-2015-094758.long. Retrieved 3 March 2016.


ePortfolio activity

If you are compiling an ePortfolio for this course, it is likely that you will be addressing some of these ethical issues in your own practice. You might want to consider this activity as a trigger for your own reflections about how we observe and monitor performance in training and competition.

portfolio activity

The ownership of performance data

Innovations in monitoring technologies are giving rise to what Robin James [1] refers to as “contemporary algorithmic culture”.

What are your thoughts on the interface between the overt monitoring of performance and the privacy that each individual might expect in a digital world?

Perhaps this 20-minute video presentation by Maciej Ceglowski might be a good place to start.


Reference

  1. Robin James. “Cloudy Logic”, 2015. Retrieved on 12 January 2016.


Reading about ethical issues

activity

Philosophy

You can find a variety of readings about some philosophical issues here.

activity

Socio-cultural aspects

You can find a variety of readings about some socio-cultural issues here.

activity

Pedagogy

You can find a variety of readings about some pedagogical issues here.

activity

Privacy and anonymity

You can find a variety of readings about privacy and data anonymity issues here.


Epistemic culture

The study of informatics and analytics offers opportunities to reflect on our shared epistemic culture.

Karin Knorr Cetina[1] says of an epistemic culture:

Everyone knows what science is about: it is about knowledge, the ‘objective’ and perhaps ‘true’ representation of the world as it really is. The problem is that no one is quite sure how scientists and other experts arrive at this knowledge. The notion of epistemic culture is designed to capture these interiorised processes of knowledge creation. It refers to those sets of practices, arrangements and mechanisms bound together by necessity, affinity and historical coincidence which, in a given area of professional expertise, make up how we know what we know. Epistemic cultures are cultures of creating and warranting knowledge.

She adds that “the focus in an epistemic culture approach is on the construction of the machineries of knowledge construction” [2] (our emphasis).

In this topic, there are three examples of the ethical issues raised by sport informatics and analytics: philosophical; socio-cultural; and pedagogical.

References

  1. Karin Knorr Cetina. “Culture in global knowledge societies: knowledge cultures and epistemic cultures”, 1999, p. 363. Retrieved on 12 January 2016.
  2. Karin Knorr Cetina. “Culture in global knowledge societies: knowledge cultures and epistemic cultures”, 1999, p. 363. Retrieved on 12 January 2016.


Introduction

This course offers opportunities to reflect on some of the ethical issues raised by the use of informatics and analytics in sport.

One of the three aims that guide this course is “To contribute to discussions about the epistemological foundations of sport informatics and analytics and the flourishing of ethical practice in the observation, recording and analysis of performance in play, games and sport”.

This is a time of rapid expansion of informatics and analytics as fields of study and as practical activities. We hope that discussions about the quantification of performance will lead to reflections about the ethical framework within which we will work as our digital world is transformed by technological innovation.

In this topic you will have the opportunity to consider:

  • Philosophical issues
  • Socio-cultural issues
  • Pedagogy
  • Privacy and anonymity



ePortfolio questions

portfolio activity

Portfolio Activity

As you work your way through this theme and compile your ePortfolio, you might like to consider these six questions.

Q19. Is ‘augmented information’ a helpful description of the ways you share information?

Q20. Does ‘feedforward’ have any place in your work?

Q21. Do you have any experience of using infographics?

Q22. Are there any visualisation approaches that you recommend?

Q23. Is the concept of a personal learning environment helpful in your practice?

Q24. Can you visualise your personal learning environment?


ePortfolio questions

portfolio activity

Portfolio Activity

As you work your way through this theme and compile your ePortfolio, you might like to consider these six questions.

Q13. What aspects of this topic are of particular interest to you?

Q14. What criteria would you use to decide which wearable technologies to use?

Q15. Were there any aspects of the work of universities, centres, and sport you found informative?

Q16. Have you used any video tracking technology or video monitoring data?

Q17. Do you have any concerns about the validity and reliability of data gathered from wearable technologies or video tracking?

Q18. Are there any ethical issues involved in monitoring data collected in the ways discussed in this theme?


Video signpost

In this video, Jocelyn Mara discusses how she monitors athlete performance. Jocelyn is a graduate of the University of Canberra and was a Teaching Fellow in the Department of Sport and Exercise at the University in 2015. She has been a postgraduate scholar in performance analysis at the Australian Institute of Sport. Her PhD research included working with the Canberra United football team.



Overview

Performance monitoring is included as Theme 3 in this course. This is a time of rapid change in the quantification of one’s own as well as others’ performances.

This theme:

  • Provides a background to performance monitoring.
  • Discusses the development of wearable technologies to monitor performance.
  • Explores motion and video tracking technologies.


ePortfolio questions

portfolio activity

Portfolio Activity

As you work your way through this theme and compile your ePortfolio, you might like to consider these six questions.

Q 7. What is systematic about ‘systematic’ observation?

Q 8. Do we need to concern ourselves about the reliability and validity of data?

Q 9. Why is it important to de-identify performance data?

Q10. What did you discover in the shared dataset?

Q11. What have you learned about supervised learning approaches?

Q12. What are your thoughts about how we relate patterns of performance to moments of performance within games?


Theme activities

Reading about pattern recognition

activity

Activity

Take some time to explore the range of resources for this theme. You might like to start with a summary of five papers on pattern recognition.

Analysing AFL GPS data

activity

Activity

Background
An Australian Rules football team shared with us a whole-game GPS data set from a game played in the 2014 season.

The data can be found here.

The team that provided the data won the game and scored the same number of points in each of the four quarters of the game.

The scores by quarter in the game were:

  • 33 v 22 (Q1)
  • 33 v 12 (Q2)
  • 33 v 26 (Q3)
  • 33 v 37 (Q4)

Questions

  1. Do the available data help map player effort in relation to this scoring pattern?
  2. What inferences can you draw from these data?
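One simple way into the first question is to compare a quarter-by-quarter effort summary with the scoring margins above. The distances below are invented placeholders; in practice they would be aggregated from the shared GPS file, whose actual column names may differ.

```python
# Margin for each quarter, from the scores listed above (the team that
# provided the data scored 33 points in every quarter).
opponent = {"Q1": 22, "Q2": 12, "Q3": 26, "Q4": 37}
margin = {q: 33 - pts for q, pts in opponent.items()}

# Hypothetical team distance covered per quarter (metres). These values
# are invented placeholders, not figures from the shared data set.
distance = {"Q1": 26500, "Q2": 25100, "Q3": 24800, "Q4": 23900}

# A simple first look: rank quarters by effort and by margin and compare.
by_effort = sorted(distance, key=distance.get, reverse=True)
by_margin = sorted(margin, key=margin.get, reverse=True)
print("effort ranking:", by_effort)
print("margin ranking:", by_margin)
```

Distance covered is, of course, a crude proxy for effort; high-speed running or accelerations per quarter might tell a different story.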


Resources

The resources to support this theme include:


Overview

The conceptualisation and operationalisation of pattern recognition are foundations of sport informatics and analytics. This theme (theme 2 of the course):

  • Discusses systematic observation of performance.
  • Introduces supervised learning approaches to data analysis.
  • Explores the connections between performance trends and athlete actions.

We present a data set for you to analyse in this theme.



portfolio activity

Portfolio Activity

As you work your way through this theme you might like to consider these six questions.

Q1. What are your thoughts about a non-linear course in which you are the driver of your own learning pathway?

Q2. How might we help you to connect with others on the course?

Q3. Do you have any resources you would like to recommend for inclusion in the course?

Q4. Is there a difference between Informatik and Informatics?

Q5. What distinguishes Sport Analytics from Sport Informatics?

Q6. What is the relationship between Informatik, Informatics, Analytics and Performance Analysis?


The theme overview provides a framework for our approach to Sport Informatics and Analytics.

The resources for this theme include:

  • A Google Slides presentation.
  • A mind map for this theme.
  • More background information about Informatics and Analytics on this Google Site.
  • Video suggestions (see slides 2-4).
  • Daniel Link’s (2009) presentation Interdisciplinarity in Sport Informatics.
  • Additional resources here.


Overview

This is the first theme in our course. We are keen to share with you our approach to open sharing and introduce some of the ideas central to a mind shift from “Not invented here” to “proudly borrowed from there”.[1]
This theme:

  • Explores our approach to open sharing.
  • Introduces people, perspectives, products and processes.
  • Draws attention to the Informatik tradition and its links with sport informatics.
  • Discusses the emergence of sport analytics.

Reference

  1. Cited by Cable Green of Creative Commons during presentations. See for example: http://bccampus.ca/2012/11/08/proudly-borrowed-from-there/




Twenty-four questions

If you choose to compile an ePortfolio for this course, you might find these questions helpful in reflecting on your learning pathway. There are six questions for each of the four themes in the course.

Theme 1 (Introductions)

Q1. What are your thoughts about a non-linear course in which you are the driver of your own learning pathway?

Q2. How might we help you to connect with others on the course?

Q3. Do you have any resources you would like to recommend for inclusion in the course?

Q4. Is there a difference between Informatik and Informatics?

Q5. What distinguishes Sport Analytics from Sport Informatics?

Q6. What is the relationship between Informatik, Informatics, Analytics and Performance Analysis?

Theme 2 (Pattern recognition)

Q7. What is systematic about ‘systematic’ observation?

Q8. Do we need to concern ourselves about the reliability and validity of data?

Q9. Why is it important to de-identify performance data?

Q10. What did you discover in the shared dataset?

Q11. What have you learned about supervised learning approaches?

Q12. What are your thoughts about how we relate patterns of performance to moments of performance within games?

Theme 3 (Performance monitoring)

Q13. What aspects of this topic are of particular interest to you?

Q14. What criteria would you use to decide which wearable technologies to use?

Q15. Were there any aspects of the work of universities, centres, and sports organisations you found informative?

Q16. Have you used any video tracking technology or video monitoring data?

Q17. Do you have any concerns about the validity and reliability of data gathered from wearable technologies or video tracking?

Q18. Are there any ethical issues involved in monitoring data collected in the ways discussed in this theme?

Theme 4 (Audiences and messages)

Q19. Is ‘augmented information’ a helpful description of the ways you share information?

Q20. Does ‘feedforward’ have any place in your work?

Q21. Do you have any experience of using infographics?

Q22. Are there any visualisation approaches that you recommend?

Q23. Is the concept of a personal learning environment helpful in your practice?

Q24. Can you visualise your personal learning environment?


Assessment strategy

The course is offered in two modes or learner pathways.

The first pathway shares the content as a personal learning opportunity. This mode is not assessed and is learner-directed.

The second pathway is assessed through the submission of an electronic portfolio (ePortfolio). This requires registration with the University of Canberra. Note that whilst this pathway is also learner-directed, there are some important formative assessment points in the course.

There is more information about the assessment for this course here.


Course aim

The aim of this OERu course is to explore the intersection of informatics and analytics in sport contexts.

Learning outcomes

We anticipate that at the completion of this course, you will be able to:

  • Demonstrate disciplined and critical insights into the observation, recording and analysis of performance in sport training and competition environments.
  • Apply knowledge of better practice in sport informatics and analytics to your own sport contexts.
  • Reflect critically on the use of sport informatics and analytics in order to anticipate and develop opportunities to transform your own and others’ performances.


Thematic content of the course (T)

Introductions (T1)

This theme introduces you to informatics and analytics and locates them within the study of sport performance. It discusses a connectivist approach to learning in an open, online course. For more information about T1 see this page.

Pattern recognition (T2)

This theme explores a range of options available to you in the analysis of sport performance. It includes a discussion of the use of R in data analysis. For more information about T2 see this page.

Performance monitoring (T3)

This theme explores the development and use of technologies to monitor performance. It discusses ethical issues relating to the quantification of the self. For more information about T3 see this page.

Audiences and messages (T4)

This theme considers some of the issues relating to the visualisation and sharing of data. It includes a discussion of digital storytelling. There is a discussion of the use of feedforward to share data with a range of audiences. For more information about T4 see this page.



Course name: Sport Informatics and Analytics

This course explores the links between informatics and analytics in sports contexts.

Course metrics

  • Notional Learning Hours: 150 hours
  • Duration: learner defined
  • Assessment: formative and summative assessment of an ePortfolio.
  • Formal credit option: yes
  • Course: a stand-alone course
  • Level: Masters

What is it about?

This is an OERu course in Sport Informatics and Analytics. It is a Masters level course.

The content shared here was developed by staff at the University of Canberra, Australia, for an open, online course in February 2015. It was included in the University of Canberra’s Master of High Performance Sport for the first cohort of students in the 2015 academic year. The content of the course is updated continuously.

What will you learn?

This course explores the links between informatics and analytics in sports contexts at a time of growing interest in the observation and analysis of sport performance. The course looks at four themes. The learning outcomes for this course can be found here.

What is involved?

This course offers opportunities to follow the content in a linear or non-linear way. We acknowledge that as a learner you will make decisions about your interests within the course content. We provide a range of resources to support your learning journey.

We have planned the course with Joi Ito’s observation in mind: “education is what people do to you and learning is what you do to yourself”.[1]

You can find a course outline here.

What are the prerequisites?

This course is designed for people who work in sport performance environments or who aspire to do so. We anticipate that the content here might be of interest to a wider audience too for whom sport is an important part of their lives. Although we introduce some basic ideas around statistics and visualisation, no prior knowledge is anticipated in the planning of the course content.

How long does the course take?

The study time for this course is estimated to be 150 hours. However, we anticipate learners will take as long as they need to satisfy their interests in this course.

What credit options are available?

The University of Canberra offers an opportunity for students to gain a credit for this course through the submission of an electronic portfolio (ePortfolio). Information about the ePortfolio requirements for this course can be found on this page.

Course outline

There are fifteen topics in this course. These are planned within the context of four themes (T1, T2, T3 and T4). These topics can be studied as weekly activities or in much more intensive periods of study. Note that topics 5, 10 and 15 relate to the ePortfolio assessment of this course.

  1. Introductions (T1).
  2. Pattern recognition (T2).
  3. Performance monitoring (T3).
  4. Audiences and messages (T4).
  5. Compile and submit Introduction ePortfolio.
  6. Ethical issues (T1).
  7. The quantified self (T3).
  8. Using R (T2).
  9. Visualising data (T4).
  10. Compile and submit Development ePortfolio.
  11. Feedforward (T4).
  12. Communities of Practice (T1).
  13. Knowledge Discovery (T2).
  14. Informatics and Analytics (T1, T2, T3, T4).
  15. Refine, complete and submit ePortfolio.

Learning outcomes

We anticipate that at the completion of this course, you will be able to:

  • Demonstrate disciplined and critical insights into the observation, recording and analysis of performance in sport training and competition environments.
  • Apply knowledge of better practice in sport informatics and analytics to your own sport contexts.
  • Reflect critically on the use of sport informatics and analytics in order to anticipate and develop opportunities to transform your own and others’ performances.

For more information about these learning outcomes see this page.

ePortfolio

This course is assessed through the submission of an ePortfolio. The ePortfolio carries 100% of the marks available in this course.

For information about the ePortfolio requirements for this course see this page.

Web resources

A list of recommended and suggested readings for this course can be found here.

Reference
