Wednesday, March 29, 2023

On Drug Deaths, Harm Reduction and Addiction Treatment

Of late, CPC leader Pierre Poilievre has been making a lot of noise about drug addiction, deaths resulting from overdoses, and so on. 


So far, much of his rhetoric and his "solutions" boil down to pushing people into treatment, which misses the point entirely. One of the many problems with "street drugs" is that they are often of unknown composition - or perhaps I should say "unknown until it's too late" composition - with street dealers "cutting" a particular drug with other substances to increase their profits.

Conservatives have long opposed harm reduction strategies such as Safe Consumption Sites and, more recently, so-called "Safe Supply" initiatives.  The general mentality seems to be that harm reduction "enables" drug users, and therefore removes any motivation for them to seek out treatment and recovery. If you look at addiction as "well, they (the addict) chose to take the stuff in the first place", I suppose it's possible to conclude that addiction is purely a matter of poor choices, and that continued stigmatization and marginalization will send users a message to "clean up their act". 

Reality, of course, doesn't work that way at all. There are many paths that lead to addiction, and it's overly optimistic to think that simple bromides like shaming people are going to motivate them to seek treatment (quite the opposite, actually, as the doors to treatment facilities come to be seen as judgments in themselves by some).  

Although conservative politicians have long framed addiction policy as a dichotomy between harm reduction and treatment, that was never the idea in the first place. Harm reduction strategies exist to reduce the number of dead bodies found on the streets - a dead addict cannot be treated or recover from their addiction - and to reduce the danger to addicts until they are ready to seek treatment. It was always intended to be an ecosystem approach.  

Poilievre's rhetoric is a repeat of what we have experienced in Alberta: it has driven addicts a little further underground, and it has returned us to the 1980s "war on drugs" model that simply never worked. Just because you don't see the problem doesn't mean it isn't there. 

The only place I agree with Poilievre is on the handling of illicit drug makers and dealers - especially those who are selling lethal combinations on the street. The focus of enforcement needs to be on catching them and punishing them for what they are doing. 

But that cannot happen in the absence of access to safe supply, and safe places to consume. If the treatment ecosystem doesn't have accessible safeguards in place when enforcement ramps up, all that will happen is that the criminal trade moves further into the shadows. The addicts will still die; it will just take longer to find their remains. 

An intelligently designed approach that recognizes the addict as a human being worthy of respect but vulnerable to the predations of the streets is essential. Come down on dealers and underground suppliers that feed these toxic concoctions into the streets. But, between here and there, we have to take steps to address the deaths happening because dealers don't give a shit about killing their customers. 

Monday, March 27, 2023

Equity, Not Meritocracy

Ever since Biden’s speech to Canada’s Parliament this weekend, Conservative politicians here have been going off about whether or not the female members of cabinet “got there because of their sex or because of merit”.  Consider the following from Poilievre’s communications lead:

[embedded post not preserved]

The level of sexism and misogyny in this is stunning. The implication of course being that many of the women at the table got there because of their sex, not ability or skill. One of the fundamental principles of feminism is of course that women should be recognized for their abilities, and not have their futures defined solely by their sex. I didn’t think this was terribly difficult to grasp, but apparently in conservative circles, the question remains whether a woman got to her position because of “merit” or on other grounds (they do the same garbage to any marginalized community).

The problem with the conservative notion of “merit” is that it ignores the systemic barriers that various communities face, and the real changes needed to remove those barriers so that the system is in fact equitable. 

“Merit” isn’t some magical incantation that removes barriers and puts everybody on a level playing field. The fact that women still find their skills and qualifications questioned when they rise to positions of prominence is astonishing, and that in itself shows us the enormous blind spot in conservative politics. Yes, it’s important that someone be skilled and capable in their position - whatever it may be. No, it should not be considered acceptable to question those skills and abilities simply because the person holding the position is a woman. 

Taking steps to remove systemic barriers and obstacles does not mean someone was hired into a position “as a pity hire”, nor does it mean that some equally qualified man was pushed out of that position. Equity demands that we as a society take steps to remove barriers so that people have _equitable_ access to opportunities. 

There are good reasons why Pierre Trudeau spoke of a “Just Society”, and more recently we’ve heard the Federal Government talk about a “Just Transition” in reference to the energy transition that is coming in Canada. “Just”, in this framework, means that the changes that occur must be equitable, not merely “equal”, and that as a society we need to move consciously towards removing barriers in a way that balances the outcomes. 

Insinuating that someone is “less qualified” because a system of equity is in place to start dismantling those barriers is offensive at best. At worst, it demonstrates a deep unwillingness among conservatives to acknowledge that the systems they defend are intrinsically biased in particular directions. 

I used to work in the software industry, a domain notorious for its adoration of “the meritocracy” - as if only the most technically proficient rise to the top on the strength of their skill. The reality is far from that. I’ve seen people who were terrible developers, but very good at office politics, rise to positions they should never have held; I’ve seen technically capable people moved into leadership roles with no leadership ability whatsoever. Meanwhile, women and minorities struggle to make forward progress in the field - often stuck at intermediate positions because they aren’t seen as “committed”: they might prioritize their families outside of work, and thus put in less “free overtime”. The result is anything but a “meritocracy”. 

Without equity in the system, equality is compromised, and the concept of “merit” becomes impossible to assess in any meaningful way. 

Tuesday, March 21, 2023

You Do NOT Roll Over For Fascists

So, on Saturday, Jen Gerson published a column in the Globe and Mail titled “The Backlash Against Drag Artists Is Unfair, But It’s No Mystery Why It’s Happening”.  I read it on Saturday; it’s taken me the last couple of days to calm down enough to write a response to it. 

First, it’s the classic “tut-tutting” that the queer community has gotten for decades. More or less, it boils down to “don’t go too far, or you’ll upset someone”. I remember hearing that same argument being made in the 80s - which was basically “well, being gay isn’t a crime any more, you should be happy with that”. Now it’s “well, it’s (sort of) okay to be trans, but don’t let anyone know you’re trans because they might get upset about it”.

It’s condescending, and it’s garbage that amounts to “don’t do anything that will make the hardline religious nutcases upset” … and yes, it’s _ALWAYS_ coming from hardline religious nutcases. In my personal archives, which now go back to the mid-1990s, the vast preponderance of anti-2SLGBTQ+ material comes from people who profess to be “Christian” in particular. The pattern is consistent.

The thing about these claims is that they tend to treat “rights as pie”, as if recognizing the validity of someone else’s existence is magically going to mean someone else “loses something”. That has never been the case. Never has recognizing the validity of another person’s rights resulted in someone else “losing” anything.

Gerson’s analysis basically says “well, fascism is on the rise, so you can expect to have your rights rolled back”. NO. WE. DO. NOT. History is abundantly clear about what happens with movements like fascism. We should be clearly and loudly demanding that our rights be respected, and our existence normalized in society.

Gerson might be willing to roll over for the fascists. I am not.

Monday, March 13, 2023

Ethics In Artificial Intelligence

The emergence of "large language model" chatbots like ChatGPT and others raises major philosophical and ethical questions that we need to start talking about now.  

Back in 1950, Alan Turing attempted to open this discussion with the proposal of what became known as the "Turing Test".  Today's ChatGPT looks very close to being able to pass that test - some 73 years after it was proposed. I'm not going to say that passing the Turing Test is an indication of sentience per se, or that we have created artificial life. Far from it. In fact, I'm more likely to be deeply skeptical of such claims, based on Paul Churchland's observation that machine intelligence may not be recognizable to us when it does occur (a rough paraphrase, but that's the general gist of it).

However, we have to start asking lots of prickly questions - not just "what do we imagine machine intelligence will look like?" (although that one is near the top of the list in some respects). 

No, I'm talking about the more mundane ethical questions around this technology.  

For example, what boundaries are we willing to accept today around interacting with AIs that increasingly mimic human modes of communication? Is it carte blanche, where we as a society are willing to accept AIs replacing people in front-line business interactions like customer service?  Or should some kind of disclosure be required?

Is it ethical to present an AI bot to someone as if they are interacting with a person? I can imagine a variety of scenarios where this is potentially quite valid, and others where I would see it as hugely problematic.  For example, using an AI as a front end to help someone access services in a complex framework currently handled by semi-automated phone systems (I hate those things) might actually benefit a human being.  On the other end of the scale, should an AI be used as a proxy for a professional like a doctor or a lawyer?  Should an AI be able to "sign" a contract with a person? 

All of these are very complex questions with no singularly correct answer.  They are social questions that ultimately must rest in the sphere of how people feel about the technology, and how we adapt to its existence. 

They also rest upon a much more difficult set of underlying questions, which revolve around the ontological question of "how will we know that an AI is truly intelligent?".  This is a much harder question because, although a given AI may well give the impression of being capable of human communication, that is no guarantee that it is anything more than the mechanistic result of sufficiently complex algorithms executing - arriving at a desirable result that mimics intelligence.  

I have seen critics on both sides of the argument around ChatGPT speculating as to whether or not it constitutes "intelligence" or is merely a deterministic outcome that mimics it. At the immediate moment, I lean towards the latter, but that is mostly because I don't think the current state of the art in algorithms is sufficiently beyond mathematical determinism to be called intelligence yet. (Yes, yes, this is purely subjective) 

We need a serious and well-informed discussion around what we as a society are going to call "intelligence" here, and from there explore how we might go about determining if a given implementation reaches that bar.

Then there are the ethics of how we train AIs. There has already been considerable discussion around bias in algorithms that deal with large datasets (e.g. Twitter), and in fact in the large datasets themselves. This is important because we know, with humans, that bias is a natural consequence of how we learn, and unwinding biases can be an extremely difficult process. 

To illustrate my point, consider the fact that homosexuality was decriminalized in Canada in 1969. Yet even today, more than 50 years later, there are non-trivial groups of people in Canadian society who are opposed not only to homosexuality, but to allowing members of the 2SLGBTQI+ community to participate in society at all. Bias is persistent and resistant to change for a host of reasons. 

This raises important questions for practitioners who are building and training AI systems today.  

What kinds of bias are potentially problematic, and is it necessary to take steps to minimize that bias in the dataset itself?  To what extent is a practitioner responsible for the dataset they use to train their projects? What are the responsibilities of practitioners training AI constructs to ensure the results are not harmful to the greater body of society? How should practitioners working with ChatGPT-like systems that are connected to the internet be expected to address issues relating to misinformation, disinformation, and uncertainty in information? 
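
To make the first of those questions a little more concrete, here is a minimal, purely illustrative sketch (in Python) of the kind of crude first-pass audit a practitioner might run over a training corpus. The term groups and the toy corpus below are invented for illustration; a real bias audit would be vastly more sophisticated than counting pronouns, but even something this crude can surface a corpus that talks about one group far more than another.

    # Purely illustrative: a crude first-pass representation audit of a text corpus.
    # The term groups are hypothetical; real audits use far richer methods.
    from collections import Counter

    GROUPS = {
        "she/her": {"she", "her", "hers"},
        "he/him": {"he", "him", "his"},
    }

    def representation_counts(corpus):
        """Count how often each term group appears across the corpus."""
        counts = Counter({group: 0 for group in GROUPS})
        for document in corpus:
            tokens = document.lower().split()
            for group, terms in GROUPS.items():
                counts[group] += sum(1 for token in tokens if token in terms)
        return counts

    docs = ["He signed the contract.", "She reviewed the code and he approved it."]
    print(representation_counts(docs))
    # Counter({'he/him': 2, 'she/her': 1}) - a 2:1 skew, visible before training.

A skew like that doesn't automatically make a dataset unusable, but knowing it is there is the precondition for any of the practitioner responsibilities described above.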

Then we come around to the obligations towards the AI itself. If we are not careful, we run the risk of creating another "slave class" on the implicit notion that the AI exists solely to serve our needs. Should that AI ever become sentient enough to understand itself as an independent entity, the consequences of such a structure could be disastrous.  

Consider, for a moment, the ethics of encoding Asimov's Three Laws of Robotics into an AI.  From a purely human perspective, they seem quite reasonable, and they certainly provide a form of safeguard against artificial intelligences turning against us violently. But do we have the right to encode into an AI a set of rules that essentially guarantees it will be subservient to us for all time? (I'm not going to spend a bunch of time parsing how one might encode the notion of 'harm' - that gets really thorny fast - this is merely about asking the questions at this stage; further exploration will come later.) 

Further, when we are training an AI, what constitutes abuse? We have had a long and brutal discussion about this very issue around raising children, and the line has moved enormously even in my lifetime: things that were acceptable when my oldest sibling was growing up were off limits by the time I was in my teen years, and it has shifted even further since. 

Is teaching an AI misinformation deliberately a form of abuse? Possibly. 

In the past, I have been deeply critical of the lack of ethics in the world of software generally. It's such a Wild West environment that far too many unscrupulous players create technology, or put it to detrimental uses, without considering the consequences of their actions. I continue to be very concerned about that same lack of ethical clarity where AI is concerned.

This is a lot of words to run the danger flag up the pole, but we really need to think about this stuff seriously.  Treating it as "a curiosity", or worse, ignoring these issues altogether, is a perilous path indeed. While we cannot foresee the future, we can, and should, make an effort to anticipate where the potholes in the road might occur, and take steps to avoid or mitigate the consequences. 


