Abstract: Customized Speech — speech targeted or tailored based on knowledge of one’s audience — is pervasive. It permeates our relationships, our culture, and, especially, our politics. Until recently, customization drew relatively little attention. Cambridge Analytica changed that. Since 2016, a consensus has decried Speech Customization as causing political manipulation, disunity, and destabilization. On this account, machine learning, social networks, and Big Data make political Customized Speech a threat we constitutionally can, and normatively should, curtail. That view is mistaken. In this Article, I offer the first systematic analysis of Customized Speech and the First Amendment. I reach two provocative results: Doctrinally, the First Amendment robustly protects Speech Customization. And normatively, even amidst Big Data, this protection can help society and democracy. Doctrinally, the use of audience information to customize speech is, itself, core protected speech. Further, audience-information collection, while less protected, may still only be regulated by carefully drawn, content-neutral, generally applicable laws. And unless and until the state affirmatively enacts such laws (as, overwhelmingly, it has not), it may not curtail speakers’ otherwise-lawful use of such information in political Speech Customization. What does this mean for democratic government? Today, Customized Speech raises fears about democratic discourse, hyper-partisan factions, and citizen autonomy. But these are less daunting than the consensus suggests, and are offset by key benefits: modern Customized Speech activates the apathetic, empowers the marginalized, and checks government overreach. Accordingly, many current proposals to restrict such Customized Speech — from disclosure requirements to outright bans — are neither constitutionally viable nor normatively required.