The sentiment expressed in legislators’ speeches in a chamber is informative, often more so than roll-call votes, particularly in legislatures with strong party discipline. Extracting legislators’ sentiment from their speeches, however, is time- and resource-intensive. For instance, a study may develop a dictionary of positive and negative sentiment words; alternatively, supervised machine learning may help, but it requires data labeled by human coders. In both cases, constructing the dictionary or the labeled data is labor-intensive and can be subjective. To address this challenge, we propose a research design that exploits closing debates on a bill, in which legislators label their own speeches by stating their position on the bill. We argue that with this set of debates as training data, analysts can accurately detect sentiment in speeches through various classification methods. Furthermore, by using paired debates on the same bill delivered from opposing positions, analysts can minimize the influence of topic-specific words irrelevant to sentiment. We apply our method to the corpus of all speeches in the House of Representatives of Japan, 1955–2014. After constructing a final-debate corpus, we train multiple supervised models and use them to scale the remaining speeches. We show that the sentiment of speeches by multiple actors (e.g., cabinet ministers, government party members, and opposition party members) lines up in the expected order over time. We also show that government backbenchers and opposition members become more polarized as the next election approaches, although both sides converge toward the end of a legislative session.
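To illustrate the general approach (this is a minimal sketch, not the authors' actual pipeline or data), the core idea is that a speaker's stated position in a closing debate serves as a free label, so a standard bag-of-words classifier can be trained on those speeches and then used to score unlabeled ones. The toy speeches, labels, and the choice of a Naive Bayes classifier below are all illustrative assumptions:

```python
# Sketch: train a bag-of-words Naive Bayes classifier on closing-debate
# speeches labeled by the speaker's stated position (support/oppose), then
# score other speeches by their support-vs-oppose log-odds.
# All speeches here are invented toy examples, not real debate text.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

def train(labeled_speeches):
    """labeled_speeches: list of (tokens, label). Returns model parameters."""
    counts = {"support": Counter(), "oppose": Counter()}
    docs = Counter()
    for tokens, label in labeled_speeches:
        counts[label].update(tokens)
        docs[label] += 1
    vocab = set(counts["support"]) | set(counts["oppose"])
    return counts, docs, vocab

def score(tokens, model):
    """Log-odds of 'support' vs 'oppose' with add-one smoothing.
    Positive values lean toward 'support', negative toward 'oppose'."""
    counts, docs, vocab = model
    total = sum(docs.values())
    log_odds = math.log(docs["support"] / total) - math.log(docs["oppose"] / total)
    n_sup = sum(counts["support"].values())
    n_opp = sum(counts["oppose"].values())
    v = len(vocab)
    for w in tokens:
        p_sup = (counts["support"][w] + 1) / (n_sup + v)
        p_opp = (counts["oppose"][w] + 1) / (n_opp + v)
        log_odds += math.log(p_sup) - math.log(p_opp)
    return log_odds

# Toy training data: the position stated in the closing debate is the label.
train_data = [
    (tokenize("I rise in support of this excellent necessary bill"), "support"),
    (tokenize("this bill deserves our full support and approval"), "support"),
    (tokenize("I oppose this flawed and harmful bill"), "oppose"),
    (tokenize("we must reject this reckless harmful proposal"), "oppose"),
]
model = train(train_data)

# Score unlabeled speeches: sign of the log-odds gives the predicted side.
print(score(tokenize("an excellent and necessary proposal"), model) > 0)  # True
print(score(tokenize("a reckless and flawed measure"), model) > 0)        # False
```

In the paper's design, the pairing of pro and con debates on the same bill would further downweight bill-specific topic words, since they appear on both sides of the training data and contribute little to the log-odds; the sketch above omits that step.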