J Hughes here: Thank you for these amendments. I confess to not following the longtermism field the way I follow others, like transhumanism. We were arguing that these fields of investigation are very diverse, and you are confirming that. We were also handwaving at some classic problems in consequentialism that we know have been addressed in great detail, and if those issues are only relevant to a minority of hard longtermists, I'm delighted and relieved to hear it.
On the long-term utility of greater equality and welfare today, we were trying to suggest that while we weight the near-term benefits of social change more heavily than we think "the field" does, there are also long-term rationales connecting social change now to reducing long-term catastrophic risks. For instance, my eye is always on the prospects for effective transnational institutions: what geopolitical changes are necessary to make an IAEA for AI a reality?
I guess we're also reflecting a skepticism about getting out over our skis in projecting what a future society will be like. Marx famously declined to write "recipes for the cook-shops of the future"; deciding how to do things in 2100 is that generation's responsibility. We also accept the likelihood of a diversification of types of beings and values, such that current categories of utility may become irrelevant (e.g., Bostrom's "whimper" x-risks). Add in singularitarian concerns about imminent unpredictability, and it sounds like we end up someplace near a cautious, soft longtermism.