Soaked feet, bone-dry cuffs

Mind & psychology. A little bit of epistemology. Plenty of other stuff. Lots of swearing. Some jokes, mostly self-deprecating. Everything's coming up Milhouse.

"Leiter Year in Review" Review

1/27/2020

 
"So Charlie, why are you doing this given your stack of grading and journal deadlines?" you might be wondering. Well get off my back! You're not my real mom!

I have an abiding interest in the culture of professional philosophy. A lot of that culture is moving online, which is handy for folks like me who know enough about coding to get themselves into trouble and just enough to get themselves out of it (most of the time). LR boasts on his blog that it's the "world's most popular philosophy blog since 2003". (Maybe that's true? IDK if he's got the stats for Daily Nous. But of course "since 2003" might describe the blog's birth year and not how long it's been most popular.) It's influential in our profession; I have no doubt that many people read it. I also know that at least some people have VERY STRONG OPINIONS about LR and BL himself.

Given how popular it is, it's worth getting the big picture of the blog. If we wanted to describe the last year of blogging in a nutshell, we might say that LR is a lot like Slate or National Review: lots of good and important news interspersed with a lot of editorializing. How do we come to this conclusion? The blog shares a lot of news about the profession and academia in general; quite a bit of news about threats to academic freedom and free speech; and plenty on the PGR and job-search advice. But along with all that are opinions expressed about Weinberg, Manne, Jenkins, Ichikawa, and the Twitter Red Guard. (I will note that there are no tags for Christa Peterson or Nathan Oseroff-Spicer, both of whom get mentions in the content of posts. BL tags tenured faculty at institutions with PhD programs, but not grad students. I, for one, can appreciate that. Maybe I'm reading too much into it, but it seems a bit more punching sideways than down, at least with respect to tags on LR.)

Leiter's Year in Review doesn't reflect his overall blogging patterns for the year (not that it has to). If I had to speculate, I'd say the posts picked are the most sensational ones, not the ones that reflect his typical blogging practice. You might get a more representative view of the blog by following the "Phil in the News" tag and then skimming some entries at random.

Finally, maybe you're interested in our three areas of investigation for the whole year? Here ya go.
[Three figures: the three areas of investigation across the whole year]

What is the PGR supposed to measure? Part 2

12/18/2019

 
Previously I argued that the PGR is not a good measure of likelihood of job placement. I came across this post from Leiter Reports which suggests that the PGR isn't a measure of placements simpliciter but of good placements, where "good" means something like "a high-quality, PhD-granting institution." How do we determine if one good placement is better than another? By the hiring institution's ranking on the PGR.

I don't find this argument -- that PGR is a good measure of good job placement and not just job placement -- terribly interesting. First, there are lots of reasons why someone wouldn't want to work at a high-PGR school. I like my Jesuit SLAC because we focus on teaching and I have a lot of support from the admin to try new ideas, both in teaching and research. And nobody is anal about the job: most of us pursue some kind of work-life balance and we have the support of our colleagues in pursuing it. 

Second, HAVE YOU SEEN THE GODDAMN JOB MARKET LATELY??? You might lovingly call it a "shit-show" but that's kind of an insult to shit-shows. My sample size is small and biased, but the jaw of every non-philosopher academic I've talked to drops when I tell them that it's normal to get 300-500 applications for an open/open position. While nobody wants a toxic work environment (which, I'm guessing, occurs at all levels of professional philosophy), I think many folks consider themselves lucky to find a job in academia. So using the PGR as a way of predicting good job placements is out of step with the lived realities of finding an academic job. The market has driven us from wanting good jobs to wanting merely jobs. We've got loans to pay and mouths to feed.

That all brings me to today's subject: perhaps the PGR is good at predicting the quality of philosophy its graduates are likely to do. On average, the suggestion goes, products of higher-ranking PGR programs do better philosophy than products of lower-ranking PGR programs.

Now if you're thinking to yourself, "hey not-so-smart guy, you went to Fordham, which isn't very highly ranked, so clearly you've got an axe to grind!" lemme stop you right there: I have very little love for my alma mater for a wide variety of reasons. I have no desire whatsoever to budge Fordham's spot in the PGR.

Back to the matter at hand. First, the biggest issue: if we're going to say that PGR ranking predicts the quality of philosophy that is likely to be done by graduates, we need some metric of quality of philosophy. As far as I know, the only one offered is the same as for smut: you know it when you see it. So let's run with this for a second. We know good philosophy when we see it. Presumably this means something like, "in reading certain kinds of philosophy, we experience X," where 'X' is short for a set of positive thoughts and feelings. And I'm sure we've all had this experience. Reading William James got me into philosophy in the first place, and I've gotten that feeling reading (in no particular order) Aristotle, Wittgenstein, Andy Clark, Susan Stebbing, Alva Noe, Jenny Saul, Alvin Goldman, Mary Midgley and Richard Menary, among many others.

The PGR, then, predicts that graduates of more highly-ranked programs are more likely than graduates of lower-ranked programs to write material that enables you to experience X, provided your tastes are like those of the raters.

On this view, the PGR works kind of like a ranking of vineyards or (what I'm more familiar with) breweries. Beer-ranking experts might say that products of brewery A are superior to products of brewery B. On this view, the best way to think about the PGR is as a taste-guide for consumers of philosophy: experts agree that the philosophy of mind coming out of NYU's grads is superior to that coming out of Stanford's grads.

Fortunately, there's a relatively easy way to see if the PGR makes good predictions on this score. (Relatively easy in principle, at least; we don't need anything like Twin Earth or Laplace's demon.) And this is a study that the APA should definitely fund. Pick some number of grads from schools at every tier of the PGR. (I think you could do this with the general ranking but it might work better with the specialties.) Commission them to write a short (~3k words) paper of their choosing, but prepare it as they would for peer review: absolutely no identifying information. (In exchange, perhaps these papers could come out in a special issue of the Journal of the APA, or somehow or other compensate authors for their time.) Then, give these papers to other philosophers and have them guess the tier of school the author comes from. We can instruct raters that they are to pay attention to the "you know it when you see it" criterion for quality philosophy, since that's what we agreed to at the start of this investigation. If it turns out that raters are pretty accurate, then we might regard the PGR as a reliable guide to the quality of philosophy a program's grads produce.
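
If you want a feel for the analysis at the end of that study, here's a minimal sketch in R. Everything in it is invented -- the tiers, the noise model, the counts -- the point is just that "raters beat chance" is a perfectly testable claim:

    # Hypothetical simulation of the blind-rating study. All numbers invented.
    set.seed(42)
    n_papers <- 60                   # e.g. 12 papers from each of 5 PGR tiers
    tiers    <- rep(1:5, each = 12)  # true tier of each author's program

    # Pretend each rater's guess is the true tier plus some noise; if the PGR
    # tracks quality, guesses should beat the 20% chance rate for 5 tiers.
    noise <- sample(-2:2, n_papers, replace = TRUE,
                    prob = c(.05, .15, .6, .15, .05))
    guess <- pmin(pmax(tiers + noise, 1), 5)

    mean(guess == tiers)                                # observed hit rate
    binom.test(sum(guess == tiers), n_papers, p = 1/5)  # better than chance?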

Now we began this by suggesting that "good philosophy" produces a kind of feeling. But what if that's the wrong metric? Maybe we rank "good philosophy" by some weighting of publications, citations, awards, grants, whatever. On this suggestion, grads of higher-ranked PGR programs produce more papers at higher-ranked venues that are more often cited (and so on), and also win more grants, than grads of lower-ranked PGR programs. I think I have some beef with that definition of "good philosophy" but let's run with it for now. That's a (relatively) simple task that can be done by gathering publicly available data. (At least, I think most of it is publicly available.) So then we just need someone with the patience, financial support, and data-analysis chops to gather publication info, citation rates, and grant-winner info (prob from NEH, NSF, and Templeton among others). These can be given weights, or a range of weights, and then we can see if the PGR makes good predictions.
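
And here's a hedged sketch of that second test in R. The column names, numbers, and weights are all made up; the real work is gathering the data and defending the weights:

    library(dplyr)

    # Invented records of graduates; 'pgr_rank' is the PhD program's rank.
    grads <- tibble::tribble(
      ~pgr_rank, ~pubs, ~cites, ~grants,
              3,     6,    120,       1,
             10,     5,     80,       1,
             15,     4,     40,       0,
             28,     2,     15,       0,
             45,     3,     22,       1
    )

    # One possible weighting -- the weights themselves are up for debate.
    grads <- grads %>%
      mutate(score = 1.0 * pubs + 0.05 * cites + 2.0 * grants)

    # Does rank predict output? (Negative rho = better rank, higher score.)
    cor.test(grads$pgr_rank, grads$score, method = "spearman")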

Either way, the key is to remember: 
1. the PGR is a survey expressing preferences,
2. if the survey data is to be useful, it has to make predictions about grads, and
3. these predictions are testable.


Pro-tip for early-career philosophers... Update: Now with directions!

12/12/2019

 
I told my friend Joe Vukov about using Web of Knowledge to figure out which journals are publishing in which areas. (E.g. which journal has been publishing the greatest volume of work on the epistemology of disagreement or on Stoic ethics.) He said it was a great idea to share with grad students, so I'm providing a more fine-grained account of the steps to discover who's publishing in what topics. If you happen to be reading this a year (or more) after I've written it, then the number of returns you get may be different from mine. So take the numbers here as indicating a snapshot of research volume on 4E cognition at a particular moment in time.

The instructions here suppose that you're following along at home. I suppose I could have used screenshots, but with two monitors it's kind of a pain in the tuckus to capture all and only what I want, hence the egocentric directions.

Does your school have a subscription? I found mine by calling up the library, and they told me to look under the 'databases' tab on the library webpage. Lo and behold, there it was. Create a login and username for yourself.

I prefer to use the advanced search since it puts more tools at your disposal, but I got started by playing with the basic search function. In a nutshell: you put in your search parameters, and the site finds every citation fitting those parameters since 1965. Not up to speed on your Boolean operators? They have a tutorial. (You find it by going to 'advanced search.' You'll see a few links telling you where to go.) I go a little overboard with the parentheses only because I can never remember which operators take priority and I'm too lazy to look it up every time.

I wanted to find out where folks have been publishing on 4E cognition in the last 3 years. So I used the following search term:

TS=("extended mind*" OR "extended cognition" OR "enactive mind*" OR "enactive cognition" OR "enactivism" OR "embodied mind*" OR "embodied cognition" OR "embedded mind*" OR "situated mind*" OR "embedded cognition" OR "situated cognition")

'TS' is topic; the asterisk captures any string that begins with the part before it -- the initial string plus anything else attached to the end. E.g. 'mind*' will catch both 'mind' and 'minds' (and 'mindreading', for that matter).
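
(If regex is more your speed: Web of Science's search syntax isn't regex, but here's a rough R analogue of how the wildcard behaves.)

    terms <- c("mind", "minds", "mindreading", "remind")
    grepl("\\bmind\\w*", terms, perl = TRUE)  # TRUE TRUE TRUE FALSE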

This returned 3,828 results. If you look under "Web of Science Categories" in the left-hand menu, you'll see that it covers psych and philosophy, but also literature, religion, management, sports science, and a bunch of other things. I only want the psych (the 1st, 2nd, and 5th in the list for me) and phil (3rd in the list) results, so I'll pick those and hit "refine". Now we're down to 2,035 citations. I also only want stuff published in the last 3 years, so I'll select the years I want under "Publication Years" in the left panel. This gets me 624 citations.

First, can we appreciate that 624 items have been published on 4E cognition between January 2017 and December 2019 in philosophy and psychology alone? Holy shit.

Ok, back to the task: go to "Analyze Results" (towards the top right) and you'll get a treemap of citations by topic. (I changed 'number of results' to 25 because I wanted as many top results as possible, and 25 is the limit.) Here's what that looks like.
[Figure: treemap of the top 25 Web of Science categories]
What the hell? I refined for 'philosophy' and 'psychology' but 'sports sciences' still made it into the analysis! I can't find anything on it on the website, but I'm assuming that if a citation is tagged as both A and B, then refining for tag A means that tag B comes along too. So a tag only drops out if it's never conjoined with a filtered-for tag, or if you specifically exclude it (which is another option when refining). That's all fine. It doesn't affect what we're doing here: we want to know which journals are publishing 4E work, and that doesn't depend on whether Web of Science's categories are mutually exclusive or how the filters work at this level. It may be important if you want to do other analyses.
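
Here's a toy R illustration of that guess about multi-tagged records (the records and tags are invented):

    library(dplyr)

    cites <- tibble::tibble(
      id   = 1:3,
      tags = list("Philosophy",
                  c("Psychology, Experimental", "Sport Sciences"),
                  "Religion")
    )

    keep <- c("Philosophy", "Psychology, Experimental")
    filtered <- cites %>%
      filter(sapply(tags, function(t) any(t %in% keep)))

    # 'Sport Sciences' survives the filter because record 2 is also
    # tagged with a kept category:
    unique(unlist(filtered$tags))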

Now to get journal titles, select "Source Titles" in the left-hand menu, and you'll get the following visualization.
[Figure: treemap of the top 25 source titles]
This tells me the top three journals publishing the greatest volume of 4E papers are Frontiers, PCS, and Synthese. Also, I had no idea that the Italian Journal of Cognitive Sciences has published more papers on 4E cognition than Cognitive Science. That'll teach me to pay attention to only English-language journals. (In case you're curious, it publishes papers in both English and Italian.)

Keep in mind some limitations:

1. It doesn't tell me if the work is critical (or not)
2. There's no information about special issues, which could inflate the numbers
3. The numbers aren't relative to the total volume of work the journal publishes

Nonetheless, the visualization gives me a good sense of which editors are less likely to give a 4E paper a desk reject for not fitting the journal's recent trends. Neato, right?
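
On limitation 3: if you can get each journal's total output for the same window, normalizing is a one-liner in R. A sketch with invented numbers:

    library(dplyr)

    journals <- tibble::tribble(
      ~journal,                   ~papers_4e, ~papers_total,
      "Frontiers in Psychology",          90,         12000,
      "Phenom Cogn Sci",                  55,           400,
      "Synthese",                         40,          2500
    )

    # A small specialist journal can have a far higher *share* of 4E work
    # than a megajournal with more raw 4E papers.
    journals %>%
      mutate(share = papers_4e / papers_total) %>%
      arrange(desc(share))
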
Now, suppose you want to expand your search for journals publishing 4E stuff since the 1960s. One thing to remember is that every time you refine your search, Web of Science counts it as a new search. So if you go to "Search History" you should be able to find your previous searches. I want to see historically who's been the greatest publisher of 4E work, so I select my search that refined for category but not year. Here's what that looks like:
[Figure: source-title treemap, all publication years]
I think there are some neat observations here. Frontiers and PCS are still the top 2. Phil Psych moves from 5th to 3rd. Synthese goes from 3rd to 10th -- suggesting they got into the 4E game more recently. Same thing for Adaptive Behavior. And Analysis doesn't make the top 25, despite publishing Clark and Chalmers's 1998 paper "The Extended Mind"!

One thing to keep in mind is that some journals in the list are generalist ones (Synthese, Cognitive Science) and others are specialists (Philosophical Psychology, PCS) so you'd expect specialists to have more 4E papers than generalist ones. Another thing to keep in mind here is when the journal was founded. Older journals have a leg up on newer ones in several ways, but Frontiers is pretty darned new (definitely newer than Synthese) soooo.....
But enough of my weird interest in the shifting sands that are the culture of professional philosophy. We've got here an easy way to learn who's publishing what. Any questions or comments, please feel free to write!

DN Twitter, 4/n

9/17/2019

 
Some stuff just for funsies...

In case you're wondering, there's no correlation between tweet length and favorites:
[Figure: tweet length vs. favorites]
Also, the average length of DN tweets has gotten longer in the last 4 years, with a decided uptick in April 2018. I thought this might have been the result of Twitter increasing the cap on tweet length from 140 to 280 characters, but that happened in November 2017. (Apologies for the crammed x-axis labels. Click on the figure to zoom in.)
[Figure: average DN tweet length over time]
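
For anyone playing along in R, here's roughly the aggregation behind a plot like this. The toy rows stand in for the real tweet data; 'created_at' and 'text' follow rtweet's column names, but treat those as assumptions:

    library(dplyr)
    library(lubridate)
    library(ggplot2)

    tweets <- data.frame(
      created_at = as.POSIXct(c("2016-05-01", "2018-04-02", "2019-06-03"),
                              tz = "UTC"),
      text       = c("short", "a somewhat longer tweet", "a medium one")
    )

    monthly <- tweets %>%
      mutate(month = floor_date(created_at, "month"),
             len   = nchar(text)) %>%
      group_by(month) %>%
      summarise(avg_len = mean(len), .groups = "drop")

    ggplot(monthly, aes(month, avg_len)) +
      geom_line() +
      # dashed line at the 140 -> 280 cap change (November 2017)
      geom_vline(xintercept = as.numeric(as.POSIXct("2017-11-01", tz = "UTC")),
                 linetype = "dashed") +
      labs(x = NULL, y = "Average tweet length (characters)")
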
What about number of likes versus number of retweets?
[Figure: likes vs. retweets]
Ok that's weird. What if we zoom in on those values that are fewer than 500?
[Figure: likes vs. retweets, zoomed to values under 500]
So that initial weirdness indicates something kinda neat: if a tweet garners loads of "likes", it's not going to be retweeted. If it gets tons of retweets, it's not going to get many "likes." But maybe there's something to the year in which the tweet was produced?
[Figure: likes vs. retweets by year]
Nope, nothing there either. So, weirdly, DN tweets are either heavily liked or heavily retweeted but not both.
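
A quick R sketch of how you might check that "not both" pattern; the counts and the 500 threshold are invented:

    library(ggplot2)

    tweets <- data.frame(
      favorite_count = c(12, 800, 30, 5, 1200),
      retweet_count  = c(600, 10, 25, 900, 8)
    )

    # Rank correlation: strongly negative here, matching the pattern above.
    cor(tweets$favorite_count, tweets$retweet_count, method = "spearman")

    # Share of tweets that clear the bar on BOTH dimensions:
    mean(tweets$favorite_count > 500 & tweets$retweet_count > 500)

    ggplot(tweets, aes(favorite_count, retweet_count)) +
      geom_point() +
      labs(x = "Likes", y = "Retweets")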

DN Twitter 3/n

9/10/2019

 
Another quick post. Here's a plot of the top 10 partnerings by year. (Recall that R returns all values in cases of ties... hence the 13 accounts listed in the 2016 graph. Oh, and if anyone knows how to order the x-axis in ggplot2 while employing facets, I would be eternally grateful for pointers. I tried everything I could think of, and everything Google turned up, and nothing worked. One candidate fix is sketched below the figure.)
[Figure: top 10 partnerings, faceted by year]
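
For anyone with the same facet-ordering itch: one commonly suggested fix is tidytext's reorder_within() plus scale_x_reordered() with free x scales. A sketch with toy data -- no promises it covers every case:

    library(ggplot2)
    library(tidytext)   # for reorder_within() / scale_x_reordered()

    df <- data.frame(
      account = c("a", "b", "c", "a", "b", "c"),
      year    = rep(c(2016, 2017), each = 3),
      n       = c(5, 9, 2, 7, 3, 8)
    )

    # Reorder the account factor *within* each year's facet.
    ggplot(df, aes(reorder_within(account, n, year), n)) +
      geom_col() +
      scale_x_reordered() +
      facet_wrap(~ year, scales = "free_x") +
      labs(x = NULL, y = "Partnerings")
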
Some interesting things to see here. First, in 2019 DN partners more frequently with philosophers of sex and gender (@Docstockk, @christapeterso, @rachelvmckinnon) -- or at least folks vocal in the debate online -- than in previous years. Second, among the top 10, with the exception of @sciam and @michaelshermer in 2018, there's no clear preference for one over another. I.e. within the top 10 accounts with whom DN partners, no one account is partnered with head and shoulders more than another. Third, no account is in the top 10 from year to year. The DN account doesn't consistently partner with one account over multiple years.
One last plot: let's look at all partnerings for DN over the same stretch of time. Names of accounts are not included because they're unreadable. But it's still pretty neat to see the shape of each plot:
[Figure: all partnerings by year]
Broken out by year, there's not the kind of long tail that we saw in the 1st post (except in 2018). But what is clear is that the DN account, over the last 4 years, engages with far more accounts than it did at the start (2016 had 89 partners, 2017 had 162, 2018 had 221, and 2019 has 189 -- though keep in mind that the 2016 counts only go back to March because of query limits on Twitter's API). But there's still a subset of accounts each year that get partnered with more frequently than others, even though the most-partnered accounts change from year to year.
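
(For the curious, those per-year partner counts fall out of a simple group-by. Toy rows again; the real data frame would have one row per partnering:)

    library(dplyr)

    partnerings <- data.frame(
      year    = c(2016, 2016, 2017, 2017, 2017),
      partner = c("@a", "@b", "@a", "@c", "@c")
    )

    partnerings %>%
      group_by(year) %>%
      summarise(partners = n_distinct(partner), .groups = "drop")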

So there you are. We've looked at the folks the DN account responds to or mentions most over the last 4-ish years. 

DN Twitter 2/n

9/9/2019

 
This is going to be pretty short. I was curious if there was any correlation between the proportion of partnerings for accounts in the DN top 25 (see previous post) and the number of followers that they have. My hunch was that more followers mean more popular and thus more likely to get mentioned or tweeted at by the DN account. Again, a rich-get-richer kind of a thing. Here's the full top 25 (which actually comes to 32, since R includes multiple values in cases of ties).
[Figure: followers vs. proportion of partnerings, full top 25]
It's a bit tough to read, but you can see @sciam in the top left corner with loads of followers and a big share of the partnerings, and everyone else clustered on the left. Let's get rid of @sciam to see if that clears things up.
[Figure: same plot without @sciam]
Ok the labels obscure things a bit. Once more without them.
[Figure: same plot, labels removed]
Aaaaaaand one more adding a regression line...
[Figure: same plot with a regression line]
The message is clear: there's little-to-no correlation between the ratio of partnerings and number of followers. My hunch was off the mark. 
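
For completeness, the check itself is only a couple of lines of R. Numbers invented; Spearman because @sciam's follower count is a massive outlier:

    top25 <- data.frame(
      followers  = c(3e6, 1.2e4, 8e3, 2.1e4, 5e3),
      prop_partn = c(0.09, 0.03, 0.04, 0.02, 0.05)
    )

    cor.test(top25$followers, top25$prop_partn, method = "spearman")

    # the regression line from the last plot:
    lm(prop_partn ~ followers, data = top25)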

