Readings for 9/12/18

Hello again 😃

Please make sure you've subscribed to email updates as described in the welcome post.

As before, I've put up the readings for next week (see tab above). You will need your UoM email id/password to log in (e.g. foobar@memphis.edu would log in as foobar).

Make sure you respond with your comments by 9/11 at noon. Make your comments by replying/commenting on this post (the one you are looking at now).

If your post from last week showed up as "Unknown," this is because your profile settings are not public. Click here to change your settings.

Comments

  1. Parker-Oliver and Demiris’ (2006) assessment of depersonalization focuses on the difficulty of assessing nonverbal behavioral clues and cues about social roles. The challenge is essentially one for the social worker, not for the client. Access to more data and visual and social information could remedy this. In other words, it’s a problem of data collection and interpretation. But embodied theories of social cognition, like Shaun Gallagher’s (here at Memphis) interaction theory, suggest a more fundamental problem. Social cognition, for enactivists like Gallagher and myself, is essentially an intersubjective interaction between two (or more) embodied agents as a coupled system. Part of why different types of therapy like cognitive behavioral therapy (CBT), psychoanalysis and psychodynamic psychotherapies, and dance therapy all more or less work despite having radically incompatible theories of human personality and psychopathology is simply that they establish a dynamic, embodied interaction between client and therapist. This dynamic interaction—this transiently and intermittently coupled system—is lost in mediatized forms of therapy, which reduce interaction to higher-order cognition like language (which CBT, psychoanalysis, and psychodynamic psychotherapy all likewise reduce the therapeutic process to, in theory if not in practice). Technology and big data can have potentially transformative effects, but perhaps this particular form of social work tech is too primitive (maybe VR is the way to go?).

    As for Elswick and colleagues (2016), this data doesn’t seem particularly “big.” Is there a way that big data, over and above data, can transform education?

    Finally, Hidalgo and Sekhon (2011) briefly mention the difference between ontological and epistemological claims of causation. One says reality is x, whereas the other more humbly says x fits our data. Can statistics (especially in the social sciences) ever get us to ontological claims—e.g., statistics using resampling methods like cross-validation and bootstrapping in James and colleagues (2013)—or are we stuck making epistemological ones?

  2. The Computers in Human Behavior paper provides some answers needed for classroom management strategies but is limited to talk-out and out-of-seat behaviors. It is a good start for the teacher to get familiar with manual and computer data collection. It seems a given that computer data collection is more accurate than manual data collection.
    The Social Work Informatics: A New Specialty article gives the advantages of combining data, information, and knowledge to advance the field of social work, especially for reaching clients with private issues and clients in remote or under-served areas. It also notes concerns with confidentiality, the Code of Ethics, and the credentials and competence level of the provider. I am very concerned about the idea that clients will have no control over the information published on the internet. The Causality paper gives good explanations of causal inference and how it is affected by a variety of research designs. It gives information that could be utilized to expand the methodology used in the Social Work Informatics paper. (my own wishful thinking)

  3. Parker-Oliver & Demiris (2006)
    I found this article particularly interesting because its publication date surprised me. I have grown up in what I feel has been a transition period: as a small child, not everything was electronic, but most everything was moving in that direction very, very quickly. It surprised me that even in 2006 the social work discipline was still battling with the idea of moving into and using technology. The concerns addressed about patient confidentiality are all very real and concerning, but in today’s world most of us do not even think twice about our information being online (i.e., online health charts are available from almost every doctor, dentist, and optometrist). With regard to using technology to provide social work services, I think that the authors identified some very important concerns, including depersonalization and “geographic and regulatory boundaries” for licensing. I would be interested to know how these issues have been addressed in the last decade.

    Elswick et al (2016)
    As a former teacher, I found this article and the Good Behavior Game interesting as well. I had never heard of the Good Behavior Game but have employed very similar things in my own classroom. It was not surprising to me that electronic collection of data was more accurate than written collection, as I have tried it both ways myself. Regardless of the method, the data is still not completely accurate. I would be interested to know how familiar the specific teachers were with technology. In today’s schools there is a wide span of technology familiarity. Many of the newer teachers are comfortable with technology, whereas many of the older teachers struggle more. It makes me wonder if age differences would make a difference in this study; however, one could argue that because all new teachers coming in are likely to be more familiar with technology, it would not matter in the long run.

    Hidalgo & Sekhon (2011)
    The technical aspects of this paper often went over my head and were beyond my current understanding of causality. I did not realize there were so many different models of causality, or the history behind each group of models. Here I would like to pose a question to those of you with more experience: What model of causality do you prefer to use, or what model is “the best”? Does it depend on the situation?

    James et al (2013)
    Again, this text was a little beyond my current understanding; however, I felt like I could understand it better than I could have previously. The sections on cross-validation were of particular interest. There seem to be many ways to do different types of cross-validation. I guess this goes back to last week, when we discussed that the type of testing used depends on many factors. Even in my statistics course, we have been discussing tests that are more ideal than others. The options for how to test data can be very overwhelming. I think that is another reason it is so important for scientists/researchers to be honest and thoughtful in their research and not just “fit” a test to their data and desired outcomes.

  4. Parker-Oliver & Demiris
    It doesn't surprise me that the use of technology in social work was controversial for a while, since many domains pushed back on the technological advances that ultimately changed how everyone in a particular field did their daily jobs. What did surprise me was that this article was published in the 21st century. Before reading this article, I would have assumed that by the year 2000 most domains would have started to embrace the advancing tech that was available to them, even if it wasn't integrated into every aspect of their domain or wasn't standardized yet. This is just my own naivete since I'm young and, like Courtney, have grown up when everything was becoming very electronic. I do believe that some of the social workers' concerns were valid, such as the necessity that all client data remains confidential and un-hackable, the qualifications of online counselors, or that they won't be able to help their clients as well without the aid of nonverbal signals. I feel that many of the concerns talked about in this article would be solved by now, 12 years later, but I wonder how remote/online social workers deal with the lack of nonverbal clues that help them read their clients. Are video chat services used to help read facial expressions?

    Elswick
    The Good Behavior Game was interesting to read about. I originally didn't know if handwritten or electronic data collection would be better. My first instinct is to say that electronically captured data reduces possible errors and would be more accurate; however, if a teacher has to stop what they are doing to use a device to report data, it could be inconvenient in the classroom and they would prefer handwritten data. Though the teachers were still far from accurate, incorporating the GBG and technology could help improve student behavior, which is neat! Additionally, many school systems are using much more technology in the classroom than even 4-5 years ago; personally, I know that my home district gives Chromebooks to all the students, starting in 4th and 5th grade, and they do assignments and work electronically. The usage of technology in classrooms will only increase, not decrease, so it is good that methods involving tech are being researched in applied settings like this one.

    Sekhon
    I thought causality was somewhat simple: the presence of IV "A" changes the outcome of DV "B," which would have been significantly different in the absence of "A," so we can say that "A" caused the change in "B."
    I didn't know that there were any models of causality. In hindsight, it makes sense that there is at least one, since some philosopher would have had to figure out causality logic at some point, but there are many models that this paper discusses. I almost feel like I should write out a table to compare them all to help keep them separate and visualize the differences. Is there a normally accepted model of causation today? Are different models of causation used in different scenarios? It makes sense that multiple models could be in use, but science generally likes to have a universal standard that scientists follow.

    Intro to Statistical Learning
    While I have heard the term "cross validation" numerous times from researchers and in the literature, I've never used cross-validation myself and don't really understand how it works. The graphs in this text are truly the only way I somewhat understand what is being explained; reading all the variations doesn't really make sense to me since I haven't ever cross-validated any data. What I can definitely gather is that cross-validation is helpful when fitting data, but it can also be very variable, which the LOOCV and the k-fold approach attempt to fix. I'm still unsure of when data should be cross-validated, though - is there a certain type of data that always should be cross-validated, or is it purely situational?
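    Since I understand things better by running them, here is a minimal sketch of how k-fold cross-validation and LOOCV actually get executed. This is my own toy example (assuming scikit-learn and made-up regression data), not anything taken from the text:

    ```python
    # Toy illustration of k-fold cross-validation vs. LOOCV (assumes scikit-learn).
    from sklearn.datasets import make_regression
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, LeaveOneOut, cross_val_score

    X, y = make_regression(n_samples=100, n_features=5, noise=10.0, random_state=0)
    model = LinearRegression()

    # 10-fold CV: fit on 9/10 of the data, test on the held-out 1/10, repeat 10 times.
    kfold_mse = -cross_val_score(model, X, y,
                                 cv=KFold(n_splits=10, shuffle=True, random_state=0),
                                 scoring="neg_mean_squared_error")

    # LOOCV: the same idea, but each held-out "fold" is a single observation (n fits total).
    loo_mse = -cross_val_score(model, X, y, cv=LeaveOneOut(),
                               scoring="neg_mean_squared_error")

    print("10-fold CV estimate of test MSE:", kfold_mse.mean())
    print("LOOCV estimate of test MSE:", loo_mse.mean())
    ```

    Both give an estimate of how the model would do on data it was not fit on; the repeated splitting is just a way of simulating "new" data from the data we already have.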

  5. After reading the Elswick article, I think my primary question would be: WHY does computer-based data collection result in better outcomes than hand-collected data? There was major discussion about the results of each method, but not a lot of discussion as to why. With my exposure to classroom dynamics, I wonder how effectively this particular technique works compared to other models or approaches—that is, what makes it successful or advantageous in comparison to others?

    Based on the overarching gist of the Parker-Oliver article, I got the notion that the benefits of transitioning to technology-based practices didn’t really exceed the challenges and barriers. This very well could have been a result of the users rather than the actual technology, so I would be curious to see comparable data today, considering this data was logged a good twelve years ago. I would pose that now, workers are far more well-rounded in their exposure to technology systems and probably expect assessment to be more heavily internet-based. I also think the issues mentioned have been reformed and cleaned up a good bit. For example, because there has been such a rise in technology-based systems and the expectation of competence in such areas, I wouldn’t think unequal access to technological resources or lack of user knowledge would pose such a great issue now. Basically, with all the updated resources we do have access to, I would find it interesting to see if there are still issues similar to the ones mentioned in this article.

    Between the Causality paper and the Introduction to Stats reading, I am starting to realize just how many different models and variations exist. (I mean, I knew there were a lot, but wow.) First off, I did not realize causality could get as complex as what the paper describes, but I would assume it is determined largely case by case and is not always as complex as some of these models illustrate. Similarly, James et al (2013) state that cross-validation is a general approach that can be applied to almost any statistical method—with that being said, is it used often? The same goes for the specific types—LOOCV and k-fold CV (both covered pretty heavily in the text): are these methods used regularly in computing statistical tests? Likewise, are several different variations of cross-validation often run on the same set of data? I have not run enough tests to be familiar or to really know.

  6. The Parker-Oliver article highlighted a few important issues in the integration of data science, social science, and care work. So far in class, we’ve talked a lot about the research-application interplay that is at the forefront of cognitive science, especially with respect to psychology. However, expanding into sociology and social work forces us to encounter interesting new problems regarding data generation, privacy, and ethics that we may take for granted in some psychological contexts. I was particularly interested in the distinction the authors made between the emerging roles of researchers and practice innovators. Is that distinction also a necessary one in cognitive science? In regard to psychology specifically, has that taken shape in the divergence (and interplay) of clinical and experimental fields, or could this develop into something that isn’t present in the current system?

    I thought the Elswick article was an interesting study, and an intriguing change of pace in that it’s an empirical study rather than the general texts and guidelines we’ve been reading. Ultimately, the results of the study underscore a limitation in data science that may be underrepresented: the quality of data collection matters. While it’s easy enough to throw gobs of data at a predictive problem, much of the data we use occurs in a context that sometimes can’t be ignored. The results regarding the display of the scores of the Good Behavior Game are interesting, but I think the results regarding the teachers’ accuracy and behaviors are even more telling of how the design of data collection must be considered in large-scale analytics. Ensuring the most accurate (i.e., reliable and unbiased) data possible will do more for the predictive models that use this data than simply increasing the amount of data. Computerization seems to help, but as this study shows, there is still a long way to go in merging data science and applied settings, educational or otherwise.

    The Sekhon article on causality illuminated the propositional logic of causality. The formulas listed under the Neyman-Rubin model make a lot of sense when broken down into their constituent parts. I can understand the authors’ suspicion that applying true counterfactual causality is extremely difficult in social science despite apparent experimental counterfactual manipulation, since the factors directly manipulated by the experimental conditions never exist in a vacuum. Rather, the experimentally manipulated factors interact within a complex network of mediating psychosocial variables that in turn influence observable results. This difficulty is especially salient given the desire to discover direct causal mechanisms. It seems to follow, then, that SEM and qualitative observations can inform experimental interpretation, but the authors note that these methods of analysis require more and stronger assumptions to be met. Since the goal of psychological science is at least as much to explain behavior as to predict it, it seems that SEM allows us to make strides in discovering the complex mediating factors within our search for causality, but is it a satisfactory step towards the discovery of causal mechanisms? More specifically, is creating a network of mediating black-box interactions that lead to a prediction more or less useful than tentatively assuming causality due to experimental manipulations that approximate counterfactual evidence? My guess is “probably,” but I’m curious to have this expanded upon from other perspectives.
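    To keep the notation straight for myself, here is a rough sketch of the core Neyman-Rubin quantities (my own simplified notation, not copied from the paper):

    \[
    \tau_i = Y_i(1) - Y_i(0), \qquad
    \mathrm{ATE} = \mathbb{E}[Y(1)] - \mathbb{E}[Y(0)]
    \;\approx\; \frac{1}{n_1}\sum_{i:\,T_i = 1} Y_i \;-\; \frac{1}{n_0}\sum_{i:\,T_i = 0} Y_i .
    \]

    The catch is that Y_i(1) and Y_i(0) are never both observed for the same unit, so the individual effect tau_i is unobservable, and the difference-in-means approximation on the right is only justified when treatment assignment T_i is independent of the potential outcomes (e.g., under randomization). Randomization buys us the average effect, but, as noted above, it does not by itself tell us the mechanism through which that effect runs.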

    In the textbook, it was good to get some clarification on what exactly training and testing sets are and how they come to be. We encountered this in the lab portion of class, but I’m glad to have the rundown of how these get created and why they are necessary in determining how models may predict new, incoming data. I think that using models on testing sets as an analogue for how they will perform on totally new, incoming data is an overlooked point in how data science is discussed at basic levels.

  7. I was unaware that so many in social work were resisting the implementation of technology and informatics in their field. While I understand that some of the concerns with implementing technology in social work are valid, it seems to me that many of the problems are rather typical of integrating any new technology. It merely becomes a matter, as the article suggests, of doing proper research and discovering in what ways the technology can be implemented to maximum benefit. It will likely not be a field that is entirely taken over by technology. However, it seems that the people for whom the technology is determined to not be helpful could still rely on older methods, while the kinds of people and problems that can benefit most from it can still make use of it. If the problem of licensing and credentialing can be solved, I think it then merely becomes a matter of practitioner and client preference of medium (whether explicit preference is discussed or if a particular method is determined to be more effective for said client). I would definitely imagine there are great individual differences for both practitioner and client as to what methods work the best between them.

    The Elswick et al (2015) article seemed more to simply be about whether implementing techniques that utilize technologies in classrooms is beneficial. I am unsure as to what the implications are for big data. I also think it should be studied and discussed how these techniques actually affect individual students. It was clear that problem behaviors decreased for the entire class, but for which children was it more effective? Were most of the problem behaviors caused by the one so-called “problem student”, and if so, was the decrease observed due to better behavior by that one student or by the other, less troublesome students?

    It is a bit unclear to me how Hume’s regularity model is so different from a statistical model, or at least how statistics could not be readily applied to it. Even if it might not be applicable in a real situation, it does still seem that one can make an appeal to statistics since his model relies upon all potential observable instances of two events being together. I do think, however, that Lewis’ counterfactual model makes a great deal of sense, especially his appeal to possible worlds. I wasn’t entirely clear on certain aspects of the statistical models as related to counterfactual models. It seemed to me as though they were talking about how the use of randomization acts as a kind of proxy to construct “possible worlds” to compare with the actual world. I have never heard it put this way, but I think it’s a very intuitive way to think about how we want to discover causality in experimentation. I suppose it is similarly the case for observational studies, but it is much less clear how observational studies should seek to simulate close enough “possible worlds”.

    The textbook reading mostly discussed subjects I had been exposed to in previous classes, but it was beneficial to get a refresher and a different kind of explanation. I think leave-one-out and k-fold cross-validation techniques are pretty intuitive, but the section on the bias-variance trade-off was very helpful. I would still like to better understand how to balance or reduce bias and variance, especially how using 5-fold or 10-fold cross-validation seems to balance bias and variance the best. How were those specific values arrived at?
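    One way I could imagine probing that question empirically (my own toy simulation, assuming scikit-learn; not something from the chapter): draw many datasets from the same known process, compute the k-fold CV estimate of test error on each for several values of k, and compare how those estimates behave.

    ```python
    # Toy simulation: behavior of the k-fold CV estimate of test MSE across repeated
    # datasets drawn from the same process, for several values of k (k = n is LOOCV).
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import KFold, cross_val_score

    rng = np.random.default_rng(0)
    n, n_datasets = 50, 200

    def one_dataset():
        X = rng.normal(size=(n, 1))
        y = 2.0 * X[:, 0] + rng.normal(scale=2.0, size=n)  # irreducible error is about 4
        return X, y

    for k in (2, 5, 10, n):
        estimates = []
        for _ in range(n_datasets):
            X, y = one_dataset()
            mse = -cross_val_score(LinearRegression(), X, y, cv=KFold(n_splits=k),
                                   scoring="neg_mean_squared_error").mean()
            estimates.append(mse)
        print(f"k = {k:2d}: mean estimate {np.mean(estimates):.2f}, "
              f"sd across datasets {np.std(estimates):.2f}")
    ```

    My understanding is that 5 and 10 are conventions that empirically tend to strike a reasonable compromise rather than values derived from first principles, but I would also like to hear how they were settled on.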

  8. Like the Parker-Oliver article discussed, I share concerns about the use of technology in social work. Not everyone has access to a computer, and there may be privacy concerns in some situations. Nonetheless, social work informatics seems promising, as a social worker seeing or treating people hundreds or thousands of miles away through a computer is better than nothing at all. Many people, especially in remote areas, might be unable to talk to anyone who can help them with their problems. The studies the paper cited are about 15-20 years old, however, and I'd be interested to know what contemporary informatics in social work looks like and how it's being used.
    The Good Behavior Game in the Elswick paper was new to me. I wish they had sought student opinions on the game, or shown that it increased student social validation. It's interesting that, for such a well-established technique, the primary dependent variable was teacher accuracy in data collection with hand-based collection vs. computer-based collection. As the paper mentioned, a possible problem is that this technology cannot be implemented in all classrooms everywhere due to cost.

    In the statistical learning chapter on resampling methods, will the validation set approach work better on large samples, or could small samples work too? Is there a minimum number of observations required?

    In the Sekhon causality paper, there was a lot I had not learned about. The Neyman-Rubin model and Structural Equation Model were interesting to read about, if hard to understand. Do you think it is necessary for scientists in most fields to know about these models in order to run studies and analyze data?

    Davis

  9. The Parker-Oliver (2006) article was an interesting take on the incorporation of technology, as many disciplines are rapidly moving toward integrating more technology into their practice. The concerns of privacy and accessibility are definitely important, but I feel like further research could be done today to help combat those issues. It would be interesting to see if some kind of app could be developed, similar to the ones used by doctors to give patients access to their "medical information portal" or however they choose to phrase it. Containing all of the interactions in one secure place could be a way to combat the hassle of recording online communications as well as reduce the risk of information being leaked. Having it available via smartphone could also help with accessibility, as people may have phones but not computers, and there has definitely been an increase in smartphone ownership since 2006, when the article was published.

    The Elswick (2016) article was interesting, as I would have guessed that the use of technology in this case would have been more distracting to the teacher and the students, and would have resulted in less accurate data collection. Looking at the results, I suppose the remote may have been less tedious for the teacher than having to stop and make tally marks, allowing them to record data more easily and devote more time to the "behavior specific praise statements." I also wonder if the students would have been more engaged and responsive to the computerized data collection because they grew up around technology, and it was bringing their world into the classroom.

    Both the Sekhon reading as well as the Intro to Statistical Learning reading really drove home the point that, with so many different options available, researchers really should think carefully and select the statistical test that will work best for their study. Basic stats classes make it seem much more cut and dried than it really is, especially with more complex research. One of my favorite lines from the Sekhon reading was when the authors stated that "a test of a hypothesis with a design susceptible to hidden bias is not particularly severe or determinative," as I feel like this is a point that a lot of people in the non-academic world sometimes don't realize. If basic information about causality were provided in more accessible ways, I feel like there would be fewer individuals who are misled by "scientific studies" they share on Facebook.

  10. Mill suggests that units ought to be exactly alike prior to treatment in order for an experiment to support the causal influence of one. Though they acknowledge that this is difficult in the social sciences (and perhaps Mill was among the early proponents of the view of psychology and related studies as “soft science”), it raises a question about how well we actually study the phenomena we purport to study. I realize many other ideas he proposed have been revised or simply are not conducive to research (that is, certain ideas may be valid but do not contribute to progress, such as questioning the existence of an objective world—without belief that there is one, there really is no point in doing research at all); however, across the conceptualizations described, the idea re-emerges of what we refer to as construct validity—that we are studying what we claim (e.g., the idea of multiple possible universes or SEM). This raises a question: Can all the movements in big data, improved statistics, or machine learning compensate for the fact that many topics within the social sciences are inherently “fuzzy” or subjective, and move us beyond “theories” that are simply correlations between self-report measures (e.g., the role of resilience in well-being)?
    The Elswick article found another interesting phenomenon, that teachers’ observations of these behaviors varied by data collection method. It was somewhat difficult for me to follow the verbose description of the methods and some serious jargon, but it seems strange that they needed to raise the question of whether technology can improve the quality of data. I can see no argument other than tradition for why someone would resist the idea of improved data collection methods.
    The Social Work Informatics article raised the opposition-to-technology issue again, though I feel this may be a more legitimate argument. For example, one compelling argument was against the use of “e-therapy” or related practices. They raise the point that many social workers (and many kinds of counselors/clinicians) need to be able to see non-verbal behavior. As an unmentioned counterargument, one could also imagine that the more anonymous format of chat may allow people to open up more. For example, discussing embarrassing situations may be easier to do when you do not have to look someone in the eye. Regardless of how non-judgmental someone claims to be, or truly is, it can be very difficult to overcome our predisposition to holding back these details.
    Finally, the James chapter raises the variance-bias trade-off again. I had not really heard this expressed in these terms before this course (only related ideas, like overfitting), and it is interesting to see how these approaches attempt to handle the problem. I wonder how well each of these methods would hold up on novel data. That is, even with a large dataset, repeated combinations of the data as samples are still drawing on a single large sample. How might these models generalize to the actual population, then (or, more realistically, to a new sample from the same population)? Overfitting is a problem within the context of the training and testing set, but does bootstrapping improve generalizability to a never-seen sample, or does it result in its own form of overfitting, where it overfits based on the sample provided and actually takes us further away from a generalizable model?
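    As one way of thinking about that question, here is a minimal bootstrap sketch (my own toy example, assuming only NumPy; not from the chapter). The resamples are drawn with replacement from the one sample we have, so the bootstrap can tell us about the variability of an estimate given that sample, but it cannot add information about parts of the population the sample never touched:

    ```python
    # Minimal bootstrap: estimate the sampling variability (standard error) of a
    # regression slope by resampling the observed data with replacement.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100
    x = rng.normal(size=n)
    y = 1.5 * x + rng.normal(scale=2.0, size=n)   # the single "observed" sample

    def slope(x, y):
        # least-squares slope of y on x
        return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

    boot_slopes = []
    for _ in range(2000):
        idx = rng.integers(0, n, size=n)          # draw n indices with replacement
        boot_slopes.append(slope(x[idx], y[idx]))

    print("slope on the original sample:", round(slope(x, y), 3))
    print("bootstrap SE of that slope:", round(float(np.std(boot_slopes)), 3))
    ```

    So my tentative answer to my own question is that the bootstrap speaks to the uncertainty of estimates rather than to generalizability per se; whether a model generalizes to a never-seen sample still depends on how representative the original sample was.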

  11. Parker-Oliver 2006
    This paper talks about using social work informatics to solve the problems of technology-based social work. It is understandable that people might have had a lot of challenges using new technologies a decade ago. I am curious about informatics itself. In the last class, we learned the concept map of data science, domain/business knowledge, machine learning, etc. So what is the relationship between informatics and those concepts?

    Elswick et al (2015)
    I am wondering whether it is appropriate to consider only the frequency of the problem behaviors. When students engage in talk-out (TO) and out-of-seat (OS) behaviors, the duration may differ. Furthermore, even though there are two methods of collecting data (technology-based and hand-collected), both are collected by the teachers or the researcher. What if the researchers and teachers had a potential bias toward the technology-based method being better?

    Sekhon-causality
    The paper lists models for causal inference. Science cares about causality. If we get data from a website or platform, it should be observational data. We have no previous hypothesis and know little about the relationships between the variables. We want to use data mining techniques to explore the data. Is it still data science?

    Introduction to statistical learning
    For k-fold cross-validation and LOOCV, both use the k-1 folds (or n-1 observations) as the training set and the remaining part as the validation set. What is the benefit of doing so? If a larger proportion were used as the validation set, what would be different?

