Here's the argument: "because researchers cherish their academic freedom, asking them to predict the future applications for research that could be in very early stages and years away from viability in products restricts that independence." I think they have a point, though not so much because of academic freedom. The deeper problem is that researchers are rarely anything close to fully informed on ethics; that's why most universities have entire departments of philosophy. We don't want AI researchers (or, for that matter, learning designers or educational technologists) deciding what is and isn't ethical. It is up to the companies and institutions that fund this work to ensure that ethics are taken into consideration, just as with any other research, through the appropriate research ethics boards.