tag:talks.cs.umd.edu,2005:/lists/19/feedSecurity Reading Group2024-03-28T07:20:36-04:00tag:talks.cs.umd.edu,2005:Talk/29352021-09-28T11:48:59-04:002021-09-28T11:48:59-04:00https://talks.cs.umd.edu/talks/2935"Why wouldn't someone think of democracy as a target?": Security practices & challenges of people involved with US political campaignsSunny Consolvo and Patrick Gage Kelley - Google<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 2, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">People who are involved with political campaigns face increased digital security threats from well-funded, sophisticated attackers, especially nation-states. Improving political campaign security is a vital part of protecting democracy. To identify campaign security issues, we conducted qualitative research with 28 participants across the U.S. political spectrum to understand the digital security practices, challenges, and perceptions of people involved in campaigns. A main, overarching finding is that a unique combination of threats, constraints, and work culture leads people involved with political campaigns to use technologies from across platforms and domains in ways that leave them—and democracy—vulnerable to security attacks. Sensitive data was kept in a plethora of personal and work accounts, with ad hoc adoption of strong passwords, two-factor authentication, encryption, and access controls. No individual company, committee, organization, campaign, or academic institution can solve the identified problems on their own.
To this end, we provide an initial understanding of this complex problem space and recommendations for how a diverse group of experts can begin working together to improve security for political campaigns.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/5672014-02-04T10:46:55-05:002014-03-02T11:07:31-05:00https://talks.cs.umd.edu/talks/567Intrusion recovery using selective re-execution (undo computing)<a href="http://pdos.csail.mit.edu/~taesoo/">Taesoo Kim - MIT CSAIL</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V. Williams Building (AVW)</a><br>Tuesday, March 11, 2014, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p class="p1">Virtually any computer system can be compromised. New software vulnerabilities are discovered and exploited daily, but even if the software is bug-free, administrators may inadvertently make mistakes in configuring permissions, or unaware users may click on buttons in application installers with little understanding of the consequences. Recovering from those inevitable compromises leads to days and weeks of wasted effort by users or system administrators, yet with no conclusive guarantee that all traces of the attack have been cleaned up. This talk will present our work on "undo computing," which aims to restore system integrity by efficiently and precisely detecting and undoing changes made by past intrusions.</p><br><b>Bio:</b> <p class="p1">Taesoo Kim is a PhD student at MIT CSAIL. He is interested in building systems with strong yet intuitive underlying principles for why they should be secure. Those principles include the simple design of a system, analysis of its implementation, and clear separation of trusted components.
He received his BS from KAIST (2009) and his SM from MIT (2011), both in CS.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/6572014-04-04T09:31:58-04:002014-04-04T09:33:07-04:00https://talks.cs.umd.edu/talks/657Secure Password StorageJohn Steven - Cigital, Inc.<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=CSI">1121 Computer Science Instructional Center (CSI)</a><br>Wednesday, April 16, 2014, 5:00-7:00 pm<br><br><b>Abstract:</b> <p>This talk will cover the fundamentals of secure password storage: specifically, how people typically store these credentials and what consequences follow from the common building blocks on which they rely. We cover the properties of hashes, salts, and adaptive one-way functions in detail. How does each of these impact the security posture? We will consider strategies for storing passwords securely and reliably.</p>
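The building blocks named above (per-user salts plus an adaptive one-way function) can be illustrated with a minimal sketch in Python using the standard library's PBKDF2 implementation; the parameter choices here (salt length, iteration count, function names) are illustrative assumptions, not recommendations from the talk:

```python
import hashlib
import hmac
import os

def hash_password(password: str, iterations: int = 600_000):
    """Derive a password record: a unique random salt plus an
    iterated (adaptive) one-way function of salt and password."""
    salt = os.urandom(16)  # per-user salt defeats precomputed rainbow tables
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, iterations, digest

def verify_password(password: str, salt: bytes, iterations: int,
                    digest: bytes) -> bool:
    """Recompute the derivation with the stored salt and compare."""
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison avoids leaking match position via timing.
    return hmac.compare_digest(candidate, digest)
```

Raising `iterations` makes every guess in an offline attack proportionally more expensive, which is what distinguishes an adaptive function from a plain hash of the password.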
<p>John will begin the talk by awarding prizes to the winners of January's Build-it, Break-it contest pilot (cf. https://builditbreakit.org), which Cigital sponsored.</p>
<p>Pizza and refreshments will be served.</p><br><b>Bio:</b> <p>John Steven is the Internal CTO at Cigital with over a decade of hands-on experience in software security. John’s expertise runs the gamut of software security from threat modeling and architectural risk analysis, through static analysis (with an emphasis on automation), to security testing. As a consultant, John has provided strategic direction as a trusted adviser to many multi-national corporations. John’s keen interest in automation keeps Cigital technology at the cutting edge. He has served as co-editor of the Building Security In department of IEEE Security & Privacy magazine, speaks with regularity at conferences and trade shows, and is the leader of the Northern Virginia OWASP chapter. John holds a B.S. in Computer Engineering and an M.S. in Computer Science both from Case Western Reserve University. Follow John on Twitter @m1splacedsoul.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/7512014-09-01T14:10:42-04:002014-09-01T14:10:42-04:00https://talks.cs.umd.edu/talks/751Parallel secure computation frameworkKartik Nayak<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, September 5, 2014, 1:00-2:00 pm<br><br><b>Abstract:</b> <p> </p>
<p style="margin: 0in 0in 0pt;"><span style="font-family: Times New Roman; font-size: medium;">Many machine learning algorithms can infer more than just the required model (age, sex, political affiliation, etc.). This problem can be solved using secure computation. However, secure computation is too slow in practice to crunch big data. Also, most algorithms make random accesses and making them trivially oblivious requires using ORAM which is not practical. We introduce a technique to efficiently reduce a large portion of graph based (which includes many machine learning) algorithms to its distributed oblivious version with minimal communication overhead due to parallelism. Due to the distributed nature of the algorithms, we can easily scale the execution by adding a large number of machines to mine a large amount of data.</span></p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/7762014-09-19T01:42:41-04:002014-09-29T11:19:49-04:00https://talks.cs.umd.edu/talks/776When Governments Hack Opponents: A Look at Actors and TechnologyBumJun Kwon<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, October 3, 2014, 1:00-2:00 pm<br><br><b>Abstract:</b> <p class="MsoNormal"><span style="font-family: Arial, sans-serif; color: #444444; font-size: small;">Repressive nation-states have long monitored telecommunications to keep tabs on political dissent. The Internet and online social networks, however, pose novel technical challenges to this practice, even as they open up new domains for surveillance. We analyze an extensive collection of suspicious files and links targeting activists, opposition members, and nongovernmental organizations in the Middle East over the past several years. We find that these artifacts reflect efforts to attack targets’ devices for the purposes of eavesdropping, stealing information, and/or unmasking anonymous users. We describe attack campaigns we have observed in Bahrain, Syria, and the United Arab Emirates, investigating attackers, tools, and techniques. In addition to off-the-shelf remote access trojans and the use of third-party IP-tracking services, we identify commercial spyware marketed exclusively to governments, including Gamma’s FinSpy and Hacking Team’s Remote Control System (RCS). We describe their use in Bahrain and the UAE, and map out the potential broader scope of this activity by conducting global scans of the corresponding command-and-control (C&C) servers. Finally, we frame the real-world consequences of these campaigns via strong circumstantial evidence linking hacking to arrests, interrogations, and imprisonment.</span></p>
<p class="MsoNormal"> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/7772014-09-19T14:17:21-04:002014-09-19T14:18:08-04:00https://talks.cs.umd.edu/talks/777ALITHEIA: Towards Practical Verifiable Graph Processing<a href="http://terpconnect.umd.edu/~zhangyp/">Yupeng Zhang - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, October 17, 2014, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>We consider a scenario in which a data owner outsources storageof a large graph to an untrusted server; the server performs computations on this graph in response to queries from a client (whether the data owner or others), and the goal is to ensure verifiability of the returned results. Existing work on verifiable computation (VC) would compile each graph computation to a circuit or a RAM program and then use generic techniques to produce a cryptographic proof of correctness for the result. Unfortunately, such an approach will incur large overhead, especially in the proof-computation time.</p>
<p>In this work we address the above by designing, building, and evaluating ALITHEIA, a nearly practical VC system tailored for graph queries such as computing shortest paths, longest paths, and maximum flow. The underlying principle of ALITHEIA is to minimize the use of generic VC systems by leveraging various algorithmic techniques specifically for graphs. This leads to both theoretical and practical improvements. Asymptotically, it improves the complexity of proof computation by at least a logarithmic factor. On the practical side, we show that ALITHEIA achieves significant performance improvements over current state-of-the-art (up to a 108× improvement in proof-computation time, and a 99.9% reduction in server storage), while scaling to 200,000-node graphs.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/8262014-10-29T18:12:59-04:002014-11-03T17:32:04-05:00https://talks.cs.umd.edu/talks/826Efficiency/Security Tradeoffs for Secure Two-party ComputationPayman Mohassel - Yahoo Labs<br><br>Wednesday, November 12, 2014, 11:00-11:59 am<br><br><b>Abstract:</b> <div>The applications we use every day deal with privacy-sensitive data that come from different sources and entities, hence creating a tension between more functionality and privacy. Secure Multiparty Computation (MPC), a fundamental problem in cryptography, tries to resolve this tension.</div>
<div>A promising direction for making MPC practical is to consider realistic relaxations in security in exchange for better efficiency. I will focus on trading off information leakage for better efficiency in the two-party setting. I start with a simple and efficient construction with security against malicious cheating that leaks an adversarially-chosen predicate of the honest party's input. Then I show how to improve it by restricting the leakage in two orthogonal ways: (i) limiting leakage to a natural notion of "only computation leaks", and (ii) reducing the probability of leakage using a tunable security parameter.</div><br><b>Bio:</b> <p><span style="font-family: HelveticaNeue; color: black; font-size: small;">Payman Mohassel is currently a Research Scientist at Yahoo Labs, Sunnyvale. He obtained his Ph.D. in computer science at the University of California, Davis in 2009, and subsequently worked as a faculty member in the Department of Computer Science at the University of Calgary. His research is in cryptography and information security with a focus on bridging the gap between the theory and practice of privacy-preserving computation.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/8852015-01-30T11:41:12-05:002015-03-23T15:43:44-04:00https://talks.cs.umd.edu/talks/885Secure Multiparty Computations on Bitcoin<a href="http://www.cs.umd.edu/~liuchang/">Chang Liu - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 3, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: 'times new roman', times; font-size: medium;"><span style="font-family: 'times new roman', times; font-size: medium;">Andrychowicz, Marcin, et al.
"Secure multiparty computations on bitcoin."</span><em style="color: #222222; font-family: Arial, sans-serif; font-size: 13px; line-height: 16.1200008392334px;">Security and Privacy (SP), 2014 IEEE Symposium on</em><span style="font-family: 'times new roman', times; font-size: medium;">. IEEE, 2014.</span></span></p>
<p><span style="font-family: 'times new roman', times; font-size: medium;">Bit coin is a decentralized digital currency, introduced in 2008, that has recently gained noticeable popularity. Its main features are: (a) it lacks a central authority that controls the transactions, (b) the list of transactions is publicly available, and (c) its syntax allows more advanced transactions than simply transferring the money. The goal of this paper is to show how these properties of Bit coin can be used in the area of secure multiparty computation protocols (MPCs). Firstly, we show that the Bit coin system provides an attractive way to construct a version of "timed commitments", where the committer has to reveal his secret within a certain time frame, or to pay a fine. This, in turn, can be used to obtain fairness in some multiparty protocols. Secondly, we introduce a concept of multiparty protocols that work "directly on Bit coin". Recall that the standard definition of the MPCs guarantees only that the protocol "emulates the trusted third party". Hence ensuring that the inputs are correct, and the outcome is respected is beyond the scope of the definition. Our observation is that the Bit coin system can be used to go beyond the standard "emulation-based" definition, by constructing protocols that link their inputs and the outputs with the real Bit coin transactions. As an instantiation of this idea we construct protocols for secure multiparty lotteries using the Bit coin currency, without relying on a trusted authority (one of these protocols uses the Bit coin-based timed commitments mentioned above). Our protocols guarantee fairness for the honest parties no matter how the loser behaves. For example: if one party interrupts the protocol then her money is transferred to the honest participants. 
Our protocols are practical (to demonstrate this, we executed their transactions in the actual Bitcoin system) and can be used in real life as a replacement for online gambling sites. We think that this paradigm can also have other applications, and we discuss some of them.</span></p><br><b>Bio:</b> <p><span style="font-family: 'Times New Roman'; font-size: medium;">Chang is currently a third-year doctoral student working with </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/~elaine/">Prof. Elaine Shi</a><span style="font-family: 'Times New Roman'; font-size: medium;">, </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/~mwh/">Prof. Michael Hicks</a><span style="font-family: 'Times New Roman'; font-size: medium;">, and </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/~bobby/">Prof. Bobby Bhattacharjee</a><span style="font-family: 'Times New Roman'; font-size: medium;"> in the </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/">Department of Computer Science</a><span style="font-family: 'Times New Roman'; font-size: medium;"> at the </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.umd.edu/">University of Maryland</a><span style="font-family: 'Times New Roman'; font-size: medium;">, College Park.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/8912015-02-03T11:00:54-05:002015-02-04T10:27:43-05:00https://talks.cs.umd.edu/talks/891Cyber War, Cyber Peace, Stones, and Glass Houses<a href="http://www.cigital.com/~gem/">Gary McGraw, Ph.D. - CTO, Cigital</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">2460 A.V. Williams Building (AVW)</a><br>Monday, March 9, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p></p>
<p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-size: 10.5pt; font-family: 'Calibri','sans-serif'; color: black;">Cyber War, Cyber Peace, Stones, and Glass Houses</span></p>
<p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-size: 10.5pt; font-family: 'Calibri','sans-serif'; color: black;">Washington has become transfixed by cyber security and with good reason. Cyber threats cost Americans billions of dollars each year and put U.S. troops at risk. Yet, too much of the discussion about cyber security is ill informed, and even sophisticated policymakers struggle to sort hype from reality. As a result, Washington focuses on many of the wrong things. Offense overshadows defense. National security concerns dominate the discussion even though most costs of insecurity are borne by civilians. Meanwhile, effective but technical measures like security engineering and building secure software are overlooked. In my view, cyber security policy must focus on solving the software security problem – fixing the broken stuff. We must refocus our energy on addressing the glass house problem instead of on building faster, more accurate stones to throw. </span></p>
<p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-size: 10.5pt; font-family: 'Calibri','sans-serif'; color: black;"><em>Lunch will be served after the talk.</em> </span></p><br><b>Bio:</b> <p></p>
<p style="margin: 0in; margin-bottom: .0001pt;"><span style="font-size: 10.5pt; font-family: 'Calibri','sans-serif'; color: black;">Gary McGraw is the CTO of Cigital, Inc., a software security consulting firm with headquarters in the Washington, D.C. area and thirteen offices throughout the world. He is a globally recognized authority on software security and the author of eight best selling books on this topic. His titles include Software Security, Exploiting Software, Building Secure Software, Java Security, Exploiting Online Games, and 6 other books; and he is editor of the Addison-Wesley Software Security series. Dr. McGraw has also written over 100 peer-reviewed scientific publications, authors a monthly security column for SearchSecurity and Information Security Magazine, and is frequently quoted in the press. Besides serving as a strategic counselor for top business and IT executives, Gary is on the Advisory Boards of Dasient (acquired by Twitter), Fortify Software (acquired by HP), Raven White, Max Financial, Invotas, and Wall+Main. His dual PhD is in Cognitive Science and Computer Science from Indiana University where he serves on the Dean’s Advisory Council for the School of Informatics. Gary served on the IEEE Computer Society Board of Governors and produces the monthly Silver Bullet Security Podcast for IEEE Security & Privacy magazine (syndicated by SearchSecurity).</span></p>
<p class="MsoNormal"><span style="font-size: 10.5pt; font-family: 'Calibri','sans-serif'; color: black;"> </span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9062015-02-09T15:04:40-05:002015-02-11T10:04:44-05:00https://talks.cs.umd.edu/talks/906Cuckoo Cycle: a memory bound graph-theoretic proof-of-work<a href="https://zikaiwen.wordpress.com/">Zikai Wen</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 13, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>We introduce the first graph-theoretic proof-of-work system, based on finding small cycles or other structures in large random graphs. Such problems are trivially verifiable and arbitrarily scalable, presumably requiring memory linear in graph size to solve efficiently. Our cycle finding algorithm uses one bit per edge, and up to one bit per node. Runtime is linear in graph size and dominated by random access latency, ideal properties for a memory bound proof-of-work. We exhibit two alternative algorithms that allow for a memory-time trade-off (TMTO)—decreased memory usage, by a factor k, coupled with increased runtime, by a factor ?(k). The constant implied in ?() gives a notion of memory-hardness, which is shown to be dependent on cycle length, guiding the latter’s choice. Our algorithms are shown to parallelize reasonably well. </p>
<p><span style="font-size: small;">Lunch will be served after the talk, please register if you plan to attend.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9142015-02-17T14:18:12-05:002015-03-09T15:31:25-04:00https://talks.cs.umd.edu/talks/914Cyber Intelligence: A Discipline Not a Data FeedJim Penrose - Darktrace<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">2460 A.V. Williams Building (AVW)</a><br>Friday, March 13, 2015, 11:00-11:59 am<br><br><b>Abstract:</b> <p>This talk will make a thought-provoking invitation to academia for a new approach to threat detection based on augmenting the existing security stack to provide intelligence to catch intruders before a crisis occurs. This talk will outline how cyber intelligence professionals can help companies evolve data management to focus on timely response action.</p><br><b>Bio:</b> <p>Jim Penrose is the EVP for Cyber Intelligence at <span class="il">Darktrace</span> where he leads the firm’s cyber operations team. A distinguished speaker, Jim has presented at the 2014 Cybersecurity Summit, the Gartner Security & Risk Management Summit, and the Suits and Spooks London 2014 meeting.<br> <br> Jim joined <span class="il">Darktrace</span> following a successful 17-year tenure at the NSA. There, he achieved the rank of Defense Intelligence Senior Level and was responsible for a variety of roles encompassing cyber threat analysis and counterterrorism. Most recently, as Chief of the Operational Discovery Center, Jim innovated new signal intelligence capabilities to discover previously unknown threats.<br> <br> Jim has been at the forefront of cyber operations throughout his career and was nominated for a Presidential Rank award in 2013. 
His credentials include a Graduate Certificate in Computer Security and Information Assurance from George Washington University.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9222015-02-20T00:39:12-05:002015-02-20T15:29:51-05:00https://talks.cs.umd.edu/talks/922Constant-Round MPC with Fairness and Guarantee of Output Delivery<a href="http://www.cs.umd.edu/~fenghao/">Feng-Hao Liu - UMD</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 27, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">We study the round complexity of multiparty computation with fairness and guaranteed output delivery, assuming the existence of an honest majority. We demonstrate a new lower bound and a matching upper bound. Our lower bound rules out any two-round fair protocols in the standalone model, even when the parties are given access to a common reference string (CRS). The lower bound follows from a reduction to the impossibility of virtual black-box obfuscation of arbitrary circuits.<br><br>Then we demonstrate a three-round protocol with guarantee of output delivery, which in general is harder than achieving fairness (since the latter allows the adversary to force a fair abort). We develop a new construction of a threshold fully homomorphic encryption scheme, with a new property that we call “flexible” ciphertexts. Roughly, our threshold encryption scheme allows parties to adapt flexible ciphertexts to the public keys of the non-aborting parties, which provides a way of handling aborts without adding any communication.<br><br></div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">This is a joint work with Dov Gordon and Elaine Shi.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">Lunch will be served after the talk. Please register.</div><br><b>Bio:</b> <p><span style="font-family: 'Times New Roman'; font-size: medium;">Fenghao is a postdoctoral research associate working at Maryland Cybersecurity Center, University of Maryland. My hosts are Professors </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/~jkatz">Jonathan Katz</a><span style="font-family: 'Times New Roman'; font-size: medium;">, </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.umd.edu/~elaine">Elaine Shi</a><span style="font-family: 'Times New Roman'; font-size: medium;"> and </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.ece.umd.edu/~danadach/">Dana Dachman-Soled</a><span style="font-family: 'Times New Roman'; font-size: medium;">. Before joining UMD, He received his PhD degree from Brown University under the supervision of Professor </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.cs.brown.edu/~anna">Anna Lysyanskaya</a><span style="font-family: 'Times New Roman'; font-size: medium;">. He received his bachelor degree from </span><a style="font-family: 'Times New Roman'; font-size: medium;" href="http://www.ee.ntu.edu.tw/en/">Department of Electrical Enginieering, National Taiwan University.</a><span style="font-family: 'Times New Roman'; font-size: medium;"> </span><br style="font-family: 'Times New Roman'; font-size: medium;"><br style="font-family: 'Times New Roman'; font-size: medium;"><span style="font-family: 'Times New Roman'; font-size: medium;">He is interested in information security with focus on cryptography. 
In particular, he tackles security challenges in various scenarios of cloud computing and develops new techniques to handle these security issues, such as how to compute on encrypted data, how to verify computation in outsourced environments, and how to protect memory/computation under physical attacks.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9342015-02-26T14:14:45-05:002015-03-25T15:59:46-04:00https://talks.cs.umd.edu/talks/934Computer-Aided Cryptography<a href="http://software.imdea.org/people/gilles.barthe/">Gilles Barthe - IMDEA Software Institute</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, March 27, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p style="text-align: left;">Computer-aided cryptography advocates the adoption of computer-aided proofs for certifying the security of cryptographic constructions. In this talk, I will outline our earlier work on developing relational program logics for reasoning about probabilistic computations with adversarial code, and present our more recent efforts to carry the security guarantees down to the implementation level, using advances in verified compilation and verified static analyses. I will also outline some limitations of the approach.</p><br><b>Bio:</b> <p style="text-align: left;">Gilles Barthe received a Ph.D. in Mathematics from the University of Manchester, UK, in 1993, and an Habilitation à diriger les recherches in Computer Science from the University of Nice, France, in 2004. He joined the IMDEA Software Institute in April 2008. Previously, he was head of the Everest team on formal methods and security at INRIA Sophia-Antipolis Méditerranée, France. He also held positions at the University of Minho, Portugal; Chalmers University, Sweden; CWI, Netherlands; University of Nijmegen, Netherlands.
He has published more than 100 refereed scientific papers. He has been coordinator/principal investigator of many national and European projects, and served as the scientific coordinator of the FP6 FET integrated project "MOBIUS: Mobility, Ubiquity and Security" for enabling proof-carrying code for Java on mobile devices (2005-2009). He has served as PC (co-)chair of VMCAI 2010, ESOP 2011, FAST 2011, SEFM 2011 and ESSOS 2012, and been a PC member of more than 70 conferences, including CCS, CSF, EUROCRYPT, ESORICS, FM, ICALP, LICS, and POPL. He is a member of the editorial board of the Journal of Automated Reasoning and of the Journal of Computer Security.</p>
<p style="text-align: left;">His research interests include programming languages and program verification, software and system security, cryptography, formal methods and foundations of mathematics and computer science. Since joining IMDEA, his research has focused on building foundations for computer-aided cryptography and privacy and on the development of tools for proving the security of cryptographic constructions and differentially private computations. He was awarded the Best Paper Award at CRYPTO 2011 and PPoPP 2013, and was an invited speaker at numerous venues, including CSF, ESORICS, ETAPS, FAST, ITP, QEST and SAS.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9372015-03-01T17:00:44-05:002015-03-01T17:00:44-05:00https://talks.cs.umd.edu/talks/937What Cryptocurrencies Can’t Do<a href="http://james.grimmelmann.net/">James Grimmelmann - University of Maryland</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=CSI">1122 Computer Science Instructional Center (CSI)</a><br>Friday, March 13, 2015, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>The technical benefits of blockchain and cryptocurrency technologies have often been linked to broader political claims that cryptocurrencies are inherently resistant to regulation. I will argue that the stronger forms of these claims are false; they rest on mistaken assumptions about the nature of law. The discretion and ambiguity built into law is a feature, not a bug: it is a crucial aspect of the interface between the crystalline world of software and the muddy reality of human affairs. The rules of Bitcoin derive ultimately from its users rather than from its protocols, and those users live in real places and depend on real governments. An obsessive focus on the double-spending problem obscures attention to the other work that offline payment and recordation systems do. 
I will discuss what is and is not really new about Bitcoin with examples drawn from numerous fields of law.</p><br><b>Bio:</b> <p>James Grimmelmann is a Professor of Law at the University of Maryland Francis King Carey School of Law and a Visiting Professor at the University of Maryland Institute for Advanced Computer Studies. He previously taught at New York Law School and the Georgetown University Law Center. He holds a J.D. from Yale Law School and an A.B. in computer science from Harvard College. Prior to law school, he worked as a programmer for Microsoft. He has served as a Resident Fellow of the Information Society Project at Yale, and as a law clerk to the Honorable Maryanne Trump Barry of the United States Court of Appeals for the Third Circuit.</p>
<p> </p>
<p>He studies how laws regulating software affect freedom, wealth, and power. As a lawyer and technologist, he helps these two groups understand each other by writing about copyright, search engines, privacy, and other topics in computer and Internet law. He is the author of the casebook Internet Law: Cases and Problems, now in its fourth edition. Other significant publications include Speech Engines, 98 Minn. L. Rev. 868 (2014), Sealand, HavenCo, and the Rule of Law, 2012 U. Ill. L. Rev. 405, and Saving Facebook, 94 Iowa L. Rev. 1137 (2009). He is a Contributing Editor for Publishers Weekly; he and his students created the Public Index website to inform the public about the Google Books settlement.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9462015-03-05T20:57:36-05:002015-04-05T17:31:32-04:00https://talks.cs.umd.edu/talks/946Cryptocurrencies vs. real-world finance: Stability, liquidity, the ability to hedge, and the missing 99 percent...Jonathan Levi - ICME Stanford University<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V. Williams Building (AVW)</a><br>Friday, April 10, 2015, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">In this session, “Cryptocurrencies vs. real-world finance: Stability, liquidity, the ability to hedge, and the missing 99 percent”, Levi will highlight issues from three of his popular recent presentations: "Small Errors in Big Data: White Noise or White Lies", "Practical Machine Learning: Theory, Practice, and the Challenges in Making These Two Meet", and "Theoretical Financial Mathematics Meets Real Data." The presentation will focus on the related challenges that should be considered in the context of financial quantitative analysis and cryptocurrencies.</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">Jonathan Levi is a pioneer and visionary in the field of scientific computing and complex quantitative financial systems. His work as a quantitative strategist in the financial services industry in London and New York, where he spent eight years building complex quantitative systems for Standard and Poor's, Barclays Capital, and Goldman Sachs, revolutionized the field. Prior to that, he worked for the Israeli Defense Forces on mission-critical, military-grade cryptographic systems with zero error tolerance, and for Cisco Systems (Network Management Technology Group) on FIPS certification of the crypto code-base for the National Security Agency.<br><br>He is currently a researcher in Computational & Mathematical Engineering at Stanford University. Premised on the belief that no single model can capture 100% of the complex attributes of live securities markets, his current research focuses on analyzing large-scale financial market data empirically to build systems that perform forensic analysis and learn about the intrinsic characteristics of financial markets.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9612015-03-22T20:51:27-04:002015-03-30T12:27:47-04:00https://talks.cs.umd.edu/talks/961Why Bitcoin matters (to computer scientists)<a href="http://randomwalker.info/">Arvind Narayanan - Princeton</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V.
Williams Building (AVW)</a><br>Friday, March 27, 2015, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">Behind the hype and tumult of the markets, researchers have been quietly producing a series of exciting results about Bitcoin and cryptocurrencies. In this talk we’ll explain why computer scientists should pay attention to these developments.<br><br>First, every machine with a Bitcoin private key effectively serves as a bug bounty that can be redeemed irreversibly and anonymously. These strong monetary incentives for attackers have exposed the inadequacy of current security practices and spurred new designs. These will likely have lasting positive impacts on security overall. Second, predicting the behavior of cryptocurrency participants has exposed limitations of game theory and mechanism design. However, as a real-world system that’s relatively "closed" and tractable, modeling Bitcoin's stability is an ambitious yet feasible goal. Third, Bitcoin has validated the concepts of secure global logs and globally distributed consensus as primitives, with an array of applications ranging from immediate, such as certificate transparency, to speculative, such as decentralized prediction markets.</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 13px;">Dr. Narayanan is an assistant professor at Princeton's Center for Information Technology Policy. He received the first NSF grant for cryptocurrency research, and has taught a Bitcoin and cryptocurrencies class for Princeton undergraduates, with a MOOC and textbook forthcoming. His research in cryptocurrency has included transaction privacy and prediction markets.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9932015-04-08T10:43:52-04:002015-04-14T09:54:30-04:00https://talks.cs.umd.edu/talks/993Bitcoin and Cryptocurrencies: The Regulatory LandscapeJerry Brito and Peter Van Valkenburgh - Coin Center<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V. Williams Building (AVW)</a><br>Thursday, April 16, 2015, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">Bitcoin, as an Internet protocol and a peer-to-peer network, is largely outside the reach of regulation. Businesses and individuals who use the network, however, are subject to a panoply of regulations, many of which today are unclear and inconsistent.
In this talk we will explain what uses of cryptocurrencies like Bitcoin are regulated, and we will survey the different regulators and what actions they have taken to date.</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">Jerry Brito is executive director of Coin Center, the leading non-profit research and advocacy group focused on the public policy issues facing cryptocurrency technologies such as Bitcoin. He also serves as adjunct professor of law at George Mason University.<br><br>Jerry has testified several times before Congress and state legislatures about cryptocurrencies, and regularly holds briefings for and consultations with policy makers. He is the coauthor of Bitcoin: A Primer for Policymakers, as well as other scholarly work on the regulation of cryptocurrencies. His op-eds have appeared in the Wall Street Journal, the New York Times, and elsewhere.<br><br>Peter is Director of Research at Coin Center. He drafts the Center’s public regulatory comments, and helps shape its research agenda. He has briefed policymakers and regulatory staff around the world on the subject of Bitcoin regulation. Previously, he was a Google Policy Fellow at TechFreedom and collaborated with various digital rights organizations on projects related to privacy, surveillance, and digital copyright law.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/9982015-04-13T10:44:31-04:002015-04-13T10:44:31-04:00https://talks.cs.umd.edu/talks/998Hash Functions from Defective Ideal CiphersAishwarya Thiruvengadam - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 17, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <pre style="font-size: 14px;">Cryptographic constructions are often designed and analyzed in idealized<br>frameworks such as the random-oracle or ideal-cipher models. When the<br>underlying primitives are instantiated in the real world, however, they<br>may be far from ideal. Constructions should therefore be robust to known<br>or potential defects in the lower-level primitives.<br><br>With this in mind, we study the construction of collision-resistant hash<br>functions from ``defective'' ideal ciphers. We introduce a model for ideal<br>ciphers that are vulnerable to differential related-key attacks, and explore<br>the security of the classical PGV constructions from such weakened<br>ciphers.
We find that although none of the PGV compression functions<br>are collision-resistant in our model, it is possible to prove collision<br>resistance up to the birthday bound for iterated (Merkle-Damgard)<br>versions of four of the PGV constructions. These four resulting hash<br>functions are also optimally preimage-resistant.<br><br>This is joint work with Jonathan Katz and Stefan Lucks.</pre><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/10192015-04-21T15:07:31-04:002015-04-21T15:07:31-04:00https://talks.cs.umd.edu/talks/1019Oblivious Query Processing<a href="http://www.cs.umd.edu/~kartik/">Kartik Nayak - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 24, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">Motivated by cloud security concerns, there is an increasing interest in database systems that can store and support queries over encrypted data. A common architecture for such systems is to use a trusted component such as a cryptographic co-processor for query processing that is used to securely decrypt data and perform computations in plaintext. The trusted component has limited memory, so most of the (input and intermediate) data is kept encrypted in an untrusted storage and moved to the trusted component on “demand.” </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">In this setting, even with strong encryption, the data access pattern from untrusted storage has the potential to reveal sensitive information; indeed, all existing systems that use a trusted component for query processing over encrypted data have this vulnerability. In this paper, we undertake the first formal study of secure query processing, where an adversary having full knowledge of the query (text) and observing the query execution learns nothing about the underlying database other than the result size of the query on the database. We introduce a simpler notion, oblivious query processing, and show formally that a query admits secure query processing iff it admits oblivious query processing. We present oblivious query processing algorithms for a rich class of database queries involving selections, joins, grouping and aggregation. For queries not handled by our algorithms, we provide some initial evidence that designing oblivious (and therefore secure) algorithms would be hard via reductions from two simple, well-studied problems that are generally believed to be hard. Our study of oblivious query processing also reveals interesting connections to database join theory.</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/10232015-04-27T13:25:54-04:002015-04-29T13:21:21-04:00https://talks.cs.umd.edu/talks/1023Threshold Multikey FHE based on LatticesLeo Fan - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, May 1, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">In this talk, I would like to describe how to construct threshold multikey FHE, which was introduced in </span><a style="color: #1155cc; font-family: arial, sans-serif; font-size: 12.8000001907349px;" href="http://eprint.iacr.org/2015/345">http://eprint.iacr.org/2015/345</a><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8000001907349px;">.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/10312015-05-05T09:52:30-04:002015-05-05T09:52:30-04:00https://talks.cs.umd.edu/talks/1031A Downloader Graph-based Early Detection System for MalwareBumJun Kwon - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, May 8, 2015, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <pre style="white-space: pre-wrap; color: #222222; margin-top: 0px; margin-bottom: 0px;"><span style="font-family: arial, helvetica, sans-serif; color: #000000;">Increased volume and sophistication of malware have led recent research efforts to focus on content-agnostic malware detection techniques. Existing works that use such techniques largely rely on understanding how malicious software is distributed among client machines.</span></pre>
<pre style="white-space: pre-wrap; color: #222222; margin-top: 0px; margin-bottom: 0px;"><span style="font-family: arial, helvetica, sans-serif; color: #000000;">In this paper, we present a complementary study that analyzes the download activity of software once it is dropped on client machines, and shows how these behavioral features can be used to distinguish malicious activity from benign behavior.</span></pre>
<pre style="white-space: pre-wrap; color: #222222; margin-top: 0px; margin-bottom: 0px;"><span style="font-family: arial, helvetica, sans-serif; color: #000000;">We introduce a novel graph-based abstraction called the download activity graph to describe the download activities on host machines. We also introduce the notion of an influence graph, defined for each piece of software, which characterizes the nature of the download activity caused by that software.</span></pre>
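The two abstractions above reduce to directed graphs: nodes are files observed on a host, an edge u → v records that u downloaded v, and a file's influence graph is everything reachable from it. A minimal sketch of how they might be built (the event data, file names, and function names here are hypothetical illustrations, not details from the paper):

```python
# Sketch of the download activity graph / influence graph abstractions.
from collections import defaultdict

def build_download_graph(events):
    """events: iterable of (downloader, downloaded_file) pairs observed
    on one host machine. Returns adjacency sets: node -> set of children."""
    graph = defaultdict(set)
    for parent, child in events:
        graph[parent].add(child)
        graph.setdefault(child, set())  # make sure every file is a node
    return graph

def influence_graph(graph, root):
    """Subgraph of nodes reachable from `root`, i.e. every file whose
    download was (transitively) caused by `root`."""
    seen, stack = set(), [root]
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        stack.extend(graph.get(node, ()))
    return {n: graph[n] & seen for n in seen}

# Hypothetical download events on one machine.
events = [("browser.exe", "setup.exe"),
          ("setup.exe", "payload_a.exe"),
          ("setup.exe", "payload_b.exe"),
          ("payload_a.exe", "payload_c.exe")]
g = build_download_graph(events)
ig = influence_graph(g, "setup.exe")  # setup.exe plus its three payloads
```

Structural features of `ig` (cycles, clustering, variation across machines) are then what the abstract's later findings are stated over.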
<pre style="white-space: pre-wrap; color: #222222; margin-top: 0px; margin-bottom: 0px;"><span style="font-family: arial, helvetica, sans-serif; color: #000000;">We use real data from one of the largest security firms to construct the influence graphs and use data-driven techniques to uncover unique and explainable insights, e.g.: (1) the influence graphs of trojans and PPI malware tend to have high clustering coefficients, while benign downloaders show low clustering; (2) adware has low clustering coefficients, but its influence graphs vary far less across machines than benign influence graphs do; (3) about 50% of trojans have download cycles in their influence graphs; (4) the influence graphs of PPI malware vary more across machines than those of trojans and adware; (5) PPI malware has a much longer download life cycle than trojans and adware; and many more.</span></pre>
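The clustering findings above rest on the local clustering coefficient: for a node, the fraction of its neighbor pairs that are themselves connected. A self-contained sketch over an undirected toy graph (the graph data is invented for illustration, not taken from the paper's dataset):

```python
def clustering_coefficient(adj, node):
    """Local clustering coefficient of `node` in an undirected graph
    given as node -> set of neighbors: the fraction of neighbor pairs
    that are directly connected to each other."""
    nbrs = adj[node]
    k = len(nbrs)
    if k < 2:
        return 0.0
    # Count edges among the neighbors, each unordered pair once.
    links = sum(1 for u in nbrs for v in nbrs if u < v and v in adj[u])
    return 2.0 * links / (k * (k - 1))

# Toy influence graph (undirected): a triangle a-b-c plus a pendant node d.
adj = {"a": {"b", "c"}, "b": {"a", "c", "d"}, "c": {"a", "b"}, "d": {"b"}}
avg = sum(clustering_coefficient(adj, n) for n in adj) / len(adj)
```

Intuitively, a tightly interlinked dropper ecosystem pushes the average toward 1, while a benign downloader whose payloads rarely download from one another stays near 0, which is the contrast findings (1) and (2) exploit.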
<pre style="white-space: pre-wrap; color: #222222; margin-top: 0px; margin-bottom: 0px;"><span style="font-family: arial, helvetica, sans-serif; color: #000000;">Finally, we use these features to learn a classifier that separates malware from benign software. Our classifier demonstrates high accuracy and a low false positive rate. Our techniques also outperform a competitive baseline based on VirusTotal in the early detection of unknown malicious software.</span></pre><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/11212015-09-10T11:58:32-04:002015-09-10T15:54:42-04:00https://talks.cs.umd.edu/talks/1121From 'penetrate and patch' to 'building security in'<a href="http://www.cs.umd.edu/~mwh/">Michael Hicks - University of Maryland, College Park</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=KEB">Atrium, Jeong H. Kim Engineering Building (KEB)</a><br>Monday, September 28, 2015, 4:00-5:00 pm<br><br><b>Abstract:</b> <p>Computer security has gone mainstream. The theft or corruption of computer data and services is no longer on the minds of a select few; it is a concern to us all. Hundreds of millions of ordinary people have suffered the consequences of cyber attacks, which we read about with increasing frequency.</p>
<p>While progress is being made to solve the computer security problem, the solutions that are easiest to deploy, such as firewalls and anti-virus software, often address the symptoms, not the cause. Many cyber attacks work by exploiting a defect or poor practice in the construction of computer software. Most security technologies do not address such defects directly, but instead attempt to detect when an exploit might be taking place. Unfortunately, such detection is impossible to achieve with perfect accuracy, and so new attacks inevitably sneak through. In the end, the root defect is often only discovered after a successful attack, resulting in a regime of 'penetrate and patch.'</p>
<p>My research is based on the idea that we must address the root cause of our security problem, not the symptoms. We must "build security in" from the start, removing the most pernicious vectors of attack so they can never be exploited. A growing research community is developing new software languages and development tools to help produce software that is likely to be secure right from the start. I will talk about some of my contributions to this area. I will also talk about my efforts, both on campus and with on-line classes and contests, to educate computer scientists and cybersecurity professionals about the software security problem and how we can fix it by rethinking software development.</p>
<p>This talk is meant for a general audience.</p><br><b>Bio:</b> <p>Michael W. Hicks is a Professor in the Computer Science department and UMIACS at the University of Maryland and is the former Director of the Maryland Cybersecurity Center (MC2). His research focuses on using programming languages and analyses to improve the security, reliability, and availability of software. He has explored the design of new programming languages and analysis tools for helping programmers find bugs and software vulnerabilities, and explored technologies to shorten patch application times by allowing software upgrades without downtime. He has taught a variety of innovative security courses, including a MOOC on software security offered by Coursera. He also led the development of a new security-oriented programming contest, "build-it, break-it, fix-it," which has been offered to the public and to his Coursera students. He blogs at http://www.pl-enthusiast.net/.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/11342015-09-24T08:59:43-04:002015-09-24T08:59:43-04:00https://talks.cs.umd.edu/talks/1134Cryptography in the Age of Quantum ComputersMark Zhandry<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, September 30, 2015, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>It is well established that full-fledged quantum computers, when realized, will completely break many of today's cryptosystems. This looming threat has led to the proposal of so-called "post-quantum" systems, namely those that appear resistant to quantum attacks. 
We argue, however, that the attacks considered in prior works model only the near future, where the attacker may be equipped with a quantum computer, but the end-users implementing the protocols are still running classical devices.<br> <br> Eventually, quantum computers will reach maturity and everyone — even the end-users — will be running quantum computers. In this event, attackers can interact with the end-users over quantum channels, opening up a new set of attacks that have not been considered before. In this talk, I will put forward new security models and new security analyses showing how to ensure security against such quantum channel attacks. In particular, these analyses allow for re-building many core cryptographic functionalities, including pseudorandom functions, encryption, digital signatures, and more, resulting in the first protocols that are safe to use in a ubiquitous quantum computing world. Along the way, we resolve several open problems in quantum query complexity, such as the Collision Problem for random functions, the Set Equality Problem, and the Oracle Interrogation Problem.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/6">CATS</a> ⋅ <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/11572015-10-08T08:33:13-04:002015-10-12T14:34:41-04:00https://talks.cs.umd.edu/talks/1157Additive and multiplicative notions of leakage, and their capacities<a href="https://sites.google.com/site/msalvimjr/">Mário S. Alvim - Federal University of Minas Gerais (Brasil)</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Monday, November 9, 2015, 10:00-11:00 am<br><br><b>Abstract:</b> <p>This talk covers work that won the <strong>2014 NSA Best Scientific Cybersecurity Paper Competition</strong>:</p>
<p>Protecting sensitive information from improper disclosure is a fundamental security goal. It is complicated and difficult to achieve, often because of unavoidable or even unpredictable operating conditions that can lead to breaches in planned security defences. An attractive approach is to frame the goal as a quantitative problem, and then to design methods that measure system vulnerabilities in terms of the amount of information they leak. A consequence is that the precise operating conditions, and assumptions about prior knowledge, can play a crucial role in assessing the severity of any measured vulnerability.</p>
<p>We develop this theme by concentrating on vulnerability measures that are robust in the sense of allowing general leakage bounds to be placed on a program, bounds that apply whatever its operating conditions and whatever the prior knowledge might be. In particular we propose a theory of channel capacity, generalising the Shannon capacity of information theory, that can apply both to additive- and to multiplicative forms of a recently-proposed measure known as g-leakage. Further, we explore the computational aspects of calculating these (new) capacities: one of these scenarios can be solved efficiently by expressing it as a Kantorovich distance, but another turns out to be NP-complete.</p>
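For the multiplicative case with the identity gain function (Bayes vulnerability), a standard closed form says the capacity is attained at the uniform prior and equals the sum of the column maxima of the channel matrix. A minimal sketch, with an illustrative channel rather than one from the talk:

```python
# Sketch: multiplicative Bayes capacity of a channel. For the identity
# gain function (Bayes vulnerability), the multiplicative leakage is
# maximized at the uniform prior and equals the sum of the column maxima
# of the channel matrix C, where C[x][y] = P(observation y | secret x).
# The matrix below is illustrative.
from math import log2

def mult_bayes_capacity(C):
    """Sum of column maxima of a row-stochastic channel matrix."""
    return sum(max(row[y] for row in C) for y in range(len(C[0])))

C = [[0.75, 0.25],
     [0.40, 0.60]]
cap = mult_bayes_capacity(C)
print(cap)        # 0.75 + 0.60 = 1.35
print(log2(cap))  # the same capacity expressed in bits
```

A noiseless channel (the identity matrix) attains the maximum, the number of secrets, while a constant channel leaks nothing and has capacity 1 (0 bits).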
<p>We also find capacity bounds for arbitrary correlations with data not directly accessed by the channel, as in the scenario of Dalenius's Desideratum.</p><br><b>Bio:</b> <p>Mário S. Alvim is (since 2013) an Assistant Professor in the Computer Science Department of the Federal University of Minas Gerais, one of the three CS departments in the country ranked as a center of excellence by the Brazilian Government. His research focus is formal methods for Information Hiding. He is particularly interested in Quantitative Information Flow, Information Theory, Statistical Disclosure Control, and Differential Privacy.</p>
<p>From January 2012 until September 2013 he was a post-doctoral researcher at the Department of Mathematics at the University of Pennsylvania under the supervision of Prof. Andre Scedrov, while also collaborating with Prof. Fred B. Schneider of Cornell University. He obtained his Ph.D. from LIX, École Polytechnique (France) in 2011 under the supervision of Prof. Catuscia Palamidessi. His dissertation on Formal Approaches to Information Hiding was a finalist of the Prix de Thèse ParisTech 2011, granted by the Paris Institute of Technology (ParisTech), representing the best thesis in Computer Science among the 632 theses defended that year in 12 of the most prestigious Grandes Écoles in France. </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/11692015-10-21T18:12:28-04:002015-10-26T00:30:12-04:00https://talks.cs.umd.edu/talks/1169Privacy-Preserving Deep Learning<a href="http://www.shokri.org">Reza Shokri - University of Texas, Austin</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Tuesday, November 3, 2015, 4:00-5:00 pm<br><br><b>Abstract:</b> <p>Deep learning based on artificial neural networks is a very popular approach to modeling, classifying, and recognizing complex data such as images, speech, and text. The unprecedented accuracy of deep learning methods has turned them into the foundation of new AI-based services on the Internet. Commercial companies that collect user data on a large scale have been the main beneficiaries, since the success of deep learning techniques is directly proportional to the amount of data available for training.</p>
<p>Massive data collection required for deep learning presents obvious privacy issues. Users' personal, highly sensitive data such as photos and voice recordings is kept indefinitely by the companies that collect it. Users can neither delete it nor restrict the purposes for which it is used. Furthermore, centrally kept data is subject to legal subpoenas and extra-judicial surveillance. In many situations, privacy and confidentiality concerns prevent data owners from sharing data and thus benefitting from large-scale deep learning.</p>
<p>In this talk, I will describe joint work with Prof. Vitaly Shmatikov on a practical system that enables multiple parties to collectively learn an accurate neural-network model for a given objective without sharing their input datasets. Our results indicate that this system offers an attractive point in the utility/privacy tradeoff space: participants preserve the privacy of their respective inputs, while still benefitting from other participants' models and thus boosting their learning accuracy beyond what is achievable solely on their own inputs.</p><br><b>Bio:</b> <p>Reza Shokri is a post-doctoral researcher at the University of Texas at Austin, and is currently visiting Cornell NYC Tech. His research focuses on computational privacy: using statistical and machine-learning tools to evaluate and protect privacy. More info: <a class="moz-txt-link-abbreviated" title="www.shokri.org" href="http://www.shokri.org/">www.shokri.org</a>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/12782016-01-25T17:06:08-05:002016-02-18T17:21:59-05:00https://talks.cs.umd.edu/talks/1278Attacks on Searchable Encryption<a href="http://www.umiacs.umd.edu/~zhangyp">Yupeng Zhang - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">DCAPS Workshop, Room 3258 A.V. Williams Building (AVW)</a><br>Friday, February 19, 2016, 9:30-10:00 am<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">With the advent of cloud computing, techniques for outsourcing encrypted data with search capability are of significant interest. Searchable Encryption has been proposed for this purpose. Searchable Encryption schemes achieve efficiency by allowing well-defined leakage. However, the practical consequences of this leakage have not been studied much in prior work. 
In this talk, I present query recovery attacks that exploit the leakage of Searchable Encryption and discuss potential countermeasures against these attacks.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/12792016-01-25T21:40:18-05:002016-02-01T12:54:23-05:00https://talks.cs.umd.edu/talks/1279Privacy-Preserving Shortest Path Computation<a href="https://www.cs.umd.edu/~wangxiao/">Xiao Wang - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 5, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Navigation is one of the most popular cloud computing services. But in virtually all cloud-based navigation systems, the client must reveal her location and destination to the cloud service provider in order to learn the fastest route. In this work, we present a cryptographic protocol for navigation on city streets that provides privacy for both the client’s location and the service provider’s routing data. Our key ingredient is a novel method for compressing the next-hop routing matrices in networks such as city street maps. Applying our compression method to the map of Los Angeles, for example, we achieve over a tenfold reduction in the representation size. In conjunction with other cryptographic techniques, this compressed representation results in an efficient protocol suitable for fully-private real-time navigation on city streets. We demonstrate the practicality of our protocol by benchmarking it on real street map data for major cities such as San Francisco and Washington, D.C.</span></p>
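The object being compressed is a next-hop routing matrix. A minimal sketch of what that structure is and how a client navigates with it, using a toy unweighted graph and plain BFS; the paper's actual compression technique is its contribution and is not reproduced here:

```python
# Sketch: a "next-hop routing matrix". Entry nh[s][t] is s's neighbor on
# a shortest path toward t, so a client can navigate by repeated lookups.
# Toy unweighted graph and BFS only; the paper's compression is omitted.
from collections import deque

def next_hop_matrix(adj):
    nodes = sorted(adj)
    nh = {s: {} for s in nodes}
    for t in nodes:
        # BFS from the target gives every node's distance to t
        dist = {t: 0}
        q = deque([t])
        while q:
            v = q.popleft()
            for u in adj[v]:
                if u not in dist:
                    dist[u] = dist[v] + 1
                    q.append(u)
        for s in nodes:
            if s != t and s in dist:
                # any neighbor strictly closer to t is a valid next hop
                nh[s][t] = min(n for n in adj[s] if dist[n] == dist[s] - 1)
    return nh

adj = {"A": ["B", "C"], "B": ["A", "D"], "C": ["A", "D"], "D": ["B", "C"]}
nh = next_hop_matrix(adj)
path, cur = ["A"], "A"
while cur != "D":          # navigate A -> D by repeated lookups
    cur = nh[cur]["D"]
    path.append(cur)
print(path)  # ['A', 'B', 'D']
```

Stored naively this matrix is quadratic in the number of intersections, which is why compressing it matters for city-scale maps.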
<p> </p>
<p><a style="color: #1155cc; font-family: arial, sans-serif; font-size: 12.8px;" href="http://crypto.stanford.edu/~dwu4/papers/PrivateShortestPaths.pdf">http://crypto.stanford.edu/~dwu4/papers/PrivateShortestPaths.pdf</a></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13082016-02-01T12:57:32-05:002016-02-01T12:57:32-05:00https://talks.cs.umd.edu/talks/1308WHYPER: Towards Automating Risk Assessment of Mobile ApplicationsZiyun Zhu - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 12, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">Application markets such as Apple’s App Store and Google’s Play Store have played an important role in the popularity of smartphones and mobile devices. However, keeping malware out of application markets is an ongoing challenge. While recent work has developed various techniques to determine what applications do, no work has provided a technical approach to answer the question: what do users expect? In this paper, we present the first step in addressing this challenge. Specifically, we focus on permissions for a given application and examine whether the application description provides any indication for why the application needs a permission. We present WHYPER, a framework using Natural Language Processing (NLP) techniques to identify sentences that describe the need for a given permission in an application description. WHYPER achieves an average precision of 82.8% and an average recall of 81.5% for three permissions (address book, calendar, and record audio) that protect frequently used security- and privacy-sensitive resources. 
These results demonstrate great promise in using NLP techniques to bridge the semantic gap between user expectations and application functionality, further aiding the risk assessment of mobile applications. </span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13192016-02-10T11:01:43-05:002016-02-10T11:01:43-05:00https://talks.cs.umd.edu/talks/1319ReDeBug: Finding Unpatched Code Clones in Entire OS DistributionsOctavian Suciu - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 26, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Programmers should never fix the same bug twice. Unfortunately, this often happens when patches to buggy code are not propagated to all code clones. Unpatched code clones represent latent bugs, and for security-critical problems, latent vulnerabilities, and thus are important to detect quickly. In this paper we present ReDeBug, a system for quickly finding unpatched code clones in OS-distribution-scale code bases. While there has been previous work on code clone detection, ReDeBug represents a unique design point that uses a quick, syntax-based approach that scales to OS-distribution-sized code bases that include code written in many different languages. Compared to previous approaches, ReDeBug may find fewer code clones, but gains scale and speed, reduces the false detection rate, and is language agnostic. We evaluated ReDeBug by checking all code from all packages in the Debian Lenny/Squeeze, Ubuntu Maverick/Oneiric, all SourceForge C and C++ projects, and the Linux kernel for unpatched code clones. 
ReDeBug processed over 2.1 billion lines of code at 700,000 LoC/min to build a source code database, then found 15,546 unpatched copies of known vulnerable code in currently deployed code by checking 376 Debian/Ubuntu security-related patches in 8 minutes on a commodity desktop machine. We show the real world impact of ReDeBug by confirming 145 real bugs in the latest version of Debian Squeeze packages.</p>
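The syntax-based matching described in the abstract can be sketched roughly as follows: normalize source lines, slide a window of n consecutive lines, hash each window, and flag a file when every window of the known-vulnerable snippet appears in it. The window size and normalization rules here are illustrative guesses, not the paper's exact parameters.

```python
# Rough sketch of syntax-based clone matching in the ReDeBug style.
# Normalization and window size are illustrative guesses.
import hashlib

def normalize(src):
    lines = []
    for ln in src.splitlines():
        ln = ln.split("//")[0]             # strip line comments
        ln = "".join(ln.split()).lower()   # strip whitespace and case
        if ln:
            lines.append(ln)
    return lines

def window_hashes(src, n=4):
    lines = normalize(src)
    return {hashlib.sha1("".join(lines[i:i + n]).encode()).hexdigest()
            for i in range(max(len(lines) - n + 1, 1))}

def contains_clone(target_src, vuln_src, n=4):
    # every window of the vulnerable snippet must appear in the target
    return window_hashes(vuln_src, n) <= window_hashes(target_src, n)

vuln = "if (len > BUF) {\n  len = BUF;  // clamp\n}\nmemcpy(dst, src, len);"
target = "// unrelated header\n" + vuln + "\nreturn 0;"
print(contains_clone(target, vuln))        # True: unpatched copy found
print(contains_clone("int x = 1;", vuln))  # False
```

Because matching is set containment over precomputed hashes, a whole distribution can be indexed once and then checked against each new security patch cheaply, which is what makes the reported throughput plausible.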
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13512016-03-07T14:30:52-05:002016-03-10T16:49:06-05:00https://talks.cs.umd.edu/talks/1351Moat: Verifying Confidentiality of Enclave Programs & Stubborn Mining: Generalizing Selfish Mining and Combining with an Eclipse Attack<a href="https://www.cs.umd.edu/~amaloz/">Alex and Kartik - MC2</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, March 11, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>1st talk:</p>
<p>Security-critical applications constantly face threats from exploits in lower computing layers such as the operating system, virtual machine monitors, or even attacks from malicious administrators. To help protect application secrets from such attacks, there is increasing interest in hardware implementations of primitives for trusted computing, such as Intel’s Software Guard Extensions (SGX) instructions. These primitives enable hardware protection of memory regions containing code and data, and provide a root of trust for measurement, remote attestation, and cryptographic sealing. However, vulnerabilities in the application itself, such as the incorrect use of SGX instructions or memory safety errors, can be exploited to divulge secrets. In this paper, we introduce a new approach to formally model these primitives and formally verify properties of so-called enclave programs that use them. More specifically, we create formal models of relevant aspects of SGX, develop several adversary models, and present a sound verification methodology (based on automated theorem proving and information flow analysis) for proving that an enclave program running on SGX does not contain a vulnerability that causes it to reveal secrets to the adversary. We introduce Moat, a tool which formally verifies confidentiality properties of applications running on SGX. We evaluate Moat on several applications, including a one time password scheme, off-the-record messaging, notary service, and secure query processing.</p>
<p> </p>
<p>2nd talk:</p>
<p> </p>
<p>Selfish mining, originally discovered by Eyal and Sirer, is a well-known attack where a selfish miner, under certain conditions, can gain a disproportionate share of reward by deviating from honest behavior.</p>
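The baseline attack that the paper generalizes can be illustrated with a small Monte Carlo sketch of the selfish-mining state machine. This is my own toy reconstruction, assuming gamma = 0 (honest miners never build on the attacker's block during a tie); all parameters are illustrative.

```python
# Toy Monte Carlo of baseline selfish mining (gamma = 0 assumed).
# My own reconstruction for illustration, not the paper's model.
import random

def selfish_revenue(alpha, rounds=200_000, seed=1):
    """Attacker's share of blocks on the final chain."""
    rng = random.Random(seed)
    lead, s, h = 0, 0, 0           # private lead, selfish/honest blocks won
    for _ in range(rounds):
        if rng.random() < alpha:   # selfish miner finds the next block
            lead += 1
        elif lead == 0:            # honest block on an even chain
            h += 1
        elif lead == 1:            # tie race: one more block decides it
            if rng.random() < alpha:
                s += 2             # withheld block + new selfish block win
            else:
                h += 2             # honest tie block + new honest block win
            lead = 0
        elif lead == 2:            # publish both, orphan the honest block
            s += 2
            lead = 0
        else:                      # big lead: one private block is now safe
            s += 1
            lead -= 1
    return s / (s + h)

print(selfish_revenue(0.40))  # above the "fair" 0.40 share
print(selfish_revenue(0.20))  # below 0.20: attack backfires at low power
```

The simulation shows the threshold behavior the literature describes: above roughly a third of the hashpower (with gamma = 0), deviating pays; below it, the attacker earns less than its fair share.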
<p> </p>
<p>In this paper, we expand the mining strategy space to include novel "stubborn" strategies that, for a large range of parameters, earn the miner more revenue. Consequently, we show that the selfish mining attack is not (in general) optimal.</p>
<p> </p>
<p>Further, we show how a miner can further amplify its gain by non-trivially composing mining attacks with network-level eclipse attacks. We show, surprisingly, that given the attacker's best strategy, in some cases victims of an eclipse attack can actually benefit from being eclipsed! </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13592016-03-21T16:23:37-04:002016-03-21T16:23:37-04:00https://talks.cs.umd.edu/talks/1359I Think They’re Trying to Tell Me Something: Advice Sources and Selection for Digital SecurityElissa Redmiles - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, March 25, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Users receive a multitude of digital- and physical-security advice every day. Indeed, if we implemented all the security advice we received, we would never leave our houses or use the Internet. Instead, users selectively choose some advice to accept and some (most) to reject; however, it is unclear whether they are effectively prioritizing what is most important or most useful. If we can understand from where and why users take security advice, we can develop more effective security interventions.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">As a first step, we conducted 25 semi-structured interviews of a demographically broad pool of users. These interviews resulted in several interesting findings: (1) participants evaluated digital-security advice based on the trustworthiness of the advice source, but evaluated physical-security advice based on their intuitive assessment of the advice content; (2) negative-security events portrayed in well-crafted fictional narratives with relatable characters (such as those shown in TV or movies) may be effective teaching tools for both digital- and physical-security behaviors; and (3) participants rejected advice for many reasons, including finding that the advice contains too much marketing material or threatens their privacy.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13702016-03-27T16:42:49-04:002016-03-27T16:42:49-04:00https://talks.cs.umd.edu/talks/1370You get where you're looking for: The impact of information sources on code securityDoowon Kim - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 1, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Vulnerabilities in Android code - including but not limited to insecure data storage, unprotected inter-component communication, broken TLS implementations, and violations of least privilege – have enabled real-world privacy leaks and motivated research cataloguing their prevalence and impact. Researchers have speculated that appification promotes security problems, as it increasingly allows inexperienced laymen to develop complex and sensitive apps. Anecdotally, Internet resources such as Stack Overflow are blamed for promoting insecure solutions that are naively copy-pasted by inexperienced developers.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">In this paper, we systematically analyzed, for the first time, how the use of information resources impacts code security. We first surveyed 295 app developers who have published in the Google Play market concerning how they use resources to solve security-related problems. Based on the survey results, we conducted a lab study with 54 Android developers (students and professionals), in which participants wrote security- and privacy-relevant code under time constraints. The participants were assigned to one of four conditions: free choice of resources, Stack Overflow only, official Android documentation only, or books only. Those participants who were allowed to use only Stack Overflow produced significantly less secure code than those using the official Android documentation or books, while participants using the official Android documentation produced significantly less functional code than those using Stack Overflow. </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">To assess the quality of Stack Overflow as a resource, we surveyed the 139 threads our participants accessed during the study, finding that only 25% of them were helpful in solving the assigned tasks and only 17% of them contained secure code snippets. In order to obtain ground truth concerning the prevalence of the secure and insecure code our participants wrote in the lab study, we statically analyzed a random sample of 200,000 apps from Google Play, finding that 93.6% of the apps used at least one of the API calls our participants used during our study. We also found that many of the security errors made by our participants also appear in the wild, possibly also originating in the use of Stack Overflow to solve programming problems. Taken together, our results confirm that API documentation is secure but hard to use, while informal documentation such as Stack Overflow is more accessible but often leads to insecurity. Given time constraints and economic pressures, we can expect that Android developers will continue to choose those resources that are easiest to use; therefore, our results firmly establish the need for secure-but-usable documentation.</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13662016-03-25T11:49:59-04:002016-03-28T16:09:57-04:00https://talks.cs.umd.edu/talks/1366Accessing Data while Preserving PrivacyAdam O'Neill - Georgetown University<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, March 30, 2016, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">We initiate the rigorous study of the privacy-efficiency tradeoffs for secure outsourced database systems. 
Systems such as CryptDB (Popa et al., SOSP '11) try to mitigate the high cost of full-fledged cryptographic solutions by relaxing the security guarantees they provide. We introduce abstract models that capture the basic properties of these systems and the information they leak. These models enable a generic, implementation-independent investigation of the aforementioned tradeoffs. </span></p>
<div style="font-size: 12.8px;"><span style="font-size: small;">For "optimally efficient" outsourced database systems, we show generic reconstruction attacks in weak adversarial models, in which the server learns the secret attributes of every record stored in the database. This points to inherent limitations of such systems. However, we go on to present a new model of "differentially private" outsourced database systems, where differential privacy is preserved even against an attacker that controls the data and the queries made to it. We show how to build on differentially private sanitizers (Blum et al., STOC '08) to achieve this. This shows that by slightly relaxing efficiency, one can achieve meaningful notions of privacy here.</span></div>
<div class="gmail_extra" style="font-size: 12.8px;"> </div>
<div class="gmail_extra" style="font-size: 12.8px;"><span style="font-size: small;">Joint work with George Kellaris, George Kollios, and Kobbi Nissim.</span></div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13742016-03-28T19:57:57-04:002016-03-28T19:57:57-04:00https://talks.cs.umd.edu/talks/1374Accountability for Distributed SystemsAndreas Haeberlen - Department of Computer and Information Science, University of Pennsylvania<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V. Williams Building (AVW)</a><br>Wednesday, April 20, 2016, 3:00-4:00 pm<br><br><b>Abstract:</b> <p>Many of our everyday activities are now performed online - whether it is banking, shopping, or chatting with friends. Behind the scenes, these activities are implemented by large distributed systems that often contain machines from several different organizations. Usually, these machines do what we expect them to, but occasionally they 'misbehave' - sometimes by mistake, sometimes to gain an advantage, and sometimes because of a deliberate attack.</p>
<p>In society, accountability is widely used to counter such threats. Accountability incentivizes good performance, exposes problems, and builds trust between competing individuals and organizations. In this talk, I will argue that accountability is also a powerful tool for designing secure distributed systems. An accountable distributed system ensures that 'misbehavior' can be detected, and that it can be linked to a specific machine via some form of digital evidence. The evidence can then be used just like in the 'offline' world, e.g., to correct the problem and/or to take action against the responsible organizations.</p>
<p>I will give an overview of our progress towards accountable distributed systems, ranging from theoretical foundations and efficient algorithms to practical applications. I will also present one result in detail: a technique that can detect information leaks through covert timing channels.</p><br><b>Bio:</b> <p>Andreas Haeberlen is a Raj and Neera Singh Assistant Professor at the University of Pennsylvania. His research interests are in security, distributed systems, and networking. Andreas received his PhD degree in Computer Science from Rice University in 2009; he is a recipient of the Otto Hahn Medal from the Max Planck Society.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13752016-03-29T09:27:22-04:002016-03-29T09:30:14-04:00https://talks.cs.umd.edu/talks/1375Rigorous Foundations for Privacy in Statistical Databases<a href="http://www.cse.psu.edu/~ads22/">Adam Smith - Pennsylvania State University</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">4172 A.V. Williams Building (AVW)</a><br>Monday, April 18, 2016, 3:00-4:00 pm<br><br><b>Abstract:</b> <p>Consider an agency holding a large database of sensitive personal information -- medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy or business decisions) while protecting the privacy of individuals' records. This problem is known variously as "statistical disclosure control", "privacy-preserving data mining" or "private data analysis".</p>
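The canonical rigorous tool for this release problem is the Laplace mechanism of differential privacy: add noise calibrated to sensitivity/epsilon so that no single record's presence is detectable. A minimal sketch; the dataset, predicate, and epsilon below are illustrative, not from the talk.

```python
# Sketch: the Laplace mechanism for an epsilon-differentially-private
# count. Data and epsilon below are illustrative.
import math
import random

def laplace_noise(scale, rng):
    # Inverse-CDF sampling of Laplace(0, scale) from one uniform draw
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def private_count(records, predicate, epsilon, rng=None):
    """Noisy count query; a counting query has sensitivity 1."""
    rng = rng or random.Random()
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

ages = [34, 51, 29, 62, 45, 38, 70, 23]
noisy = private_count(ages, lambda a: a >= 40, epsilon=0.5,
                      rng=random.Random(7))
print(round(noisy, 2))  # near the true count of 4, but perturbed
```

Smaller epsilon means stronger privacy and noisier answers; the noise is unbiased, so repeated releases average toward the true count, which is exactly why the privacy budget must be tracked across queries.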
<p>I will begin by discussing what makes this problem difficult, and exhibit some of the nontrivial problems that plague simple attempts at anonymization and aggregation. Motivated by this, I will present differential privacy, a rigorous definition of privacy in statistical databases that has received significant attention. I'll explain some recent results on the design of differentially private algorithms, as well as the application of these ideas in contexts with no (previously) apparent connection to privacy.</p><br><b>Bio:</b> <p>Adam Smith is an associate professor in the Department of Computer Science and Engineering at Penn State. His research interests lie in data privacy and cryptography and their connections to information theory, statistical learning and quantum computing. He received his Ph.D. from MIT in 2004 and was subsequently a visiting scholar at the Weizmann Institute of Science and UCLA and a visiting professor at Boston University and Harvard. He received a 2009 Presidential Early Career Award for Scientists and Engineers (PECASE) and the 2016 Theory of Cryptography Test of Time Award (with Dwork, McSherry and Nissim).</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13782016-03-31T13:00:13-04:002016-03-31T13:00:13-04:00https://talks.cs.umd.edu/talks/1378Content Modification Attacks in Bilateral Teleoperation<a href="http://terpconnect.umd.edu/~nchopra/Site/Home.html">Nikhil Chopra - Mechanical Engineering UMD</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, April 8, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>In this talk, cyber security issues with respect to control systems will be initially presented. Then the talk will focus on a specific application, namely the problem of secure control of bilateral teleoperators, where the networked control system design is focused on coordinated control of two non-collocated robotic systems. Specifically, the vulnerability of bilateral teleoperation systems to content modification attacks will be presented, wherein the attacker can modify the states being exchanged between the two robots. Subsequently, attacks that lead to destabilization of the system and the corresponding safety measures will be discussed. The implementation of these attacks on two robotic systems in our lab will also be presented.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13822016-04-04T09:51:23-04:002016-04-04T11:13:30-04:00https://talks.cs.umd.edu/talks/1382Challenges and Opportunities in the Federal Cybersecurity R&D Strategic PlanGreg Shannon - Assistant Director for Cybersecurity Strategy in the White House Office of Science and Technology Policy<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Monday, April 11, 2016, 4:00-5:00 pm<br><br><b>Abstract:</b>
<p class="MsoNormal">The President has said that cybersecurity is one of the most important challenges we face as a Nation. The 2016 Federal Cybersecurity Research and Development Strategic Plan lays out the science and technology needed to ensure America’s prosperity and national security in cyberspace.</p>
<p class="MsoNormal">To make cyberspace inherently more secure, the plan challenges the cybersecurity R&D community to provide methods and tools for deterring, protecting, detecting, and adapting to malicious cyber activities. The plan defines near-, mid-, and long-term goals to guide and evaluate progress.</p>
<ul>
<li>Near-term: achieve science and technology (S&T) advances that counter adversaries’ asymmetrical advantages with effective and efficient risk management.</li>
<li>Mid-term: reverse adversaries’ asymmetrical advantages by developing sustainably secure systems and operations.</li>
<li>Long-term: achieve S&T advances that deter malicious cyber activities by increasing adversaries’ costs and risks, while also lowering their gains.</li>
</ul>
<p class="MsoNormal">After providing an overview of the plan, we’ll discuss the R&D challenges and objectives therein that emphasize opportunities for the research community to improve cybersecurity. The stated objectives provide a basis for measuring overall progress in the implementation of this plan, though they do not address all areas of need and should not be considered comprehensive.</p>
<p class="MsoNormal">Let’s work together to make the internet more secure.</p><br><b>Bio:</b> <p></p>
<p>Dr. Greg Shannon is the Chief Scientist for the CERT® Division at Carnegie Mellon University's Software Engineering Institute, where he expands cybersecurity research, advances national and international research agendas, and promotes data-driven science for cybersecurity. Shannon is currently on part-time detail to the White House Office of Science & Technology Policy as the Assistant Director for Cybersecurity Strategy. Shannon has served as the Chair of IEEE's Cybersecurity Initiative (2015) and the General Chair for the IEEE Symposium on Security & Privacy (2015). In 2012 he cofounded the Workshop on Learning from Authoritative Security Experiment Results (LASER, <a href="http://www.laser-workshop.org">www.laser-workshop.org</a>). Shannon received a BS in Computer Science from Iowa State University with minors in Mathematics, Economics, and Statistics. He earned his MS and PhD in Computer Sciences at Purdue University, on a fellowship from the Packard Foundation. He is a member of ACM and a Senior Member of IEEE.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13912016-04-12T12:02:33-04:002016-04-12T12:02:33-04:00https://talks.cs.umd.edu/talks/1391Cache Template Attacks: Automating Attacks on Inclusive Last-Level CachesAria Shahverdi - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, April 15, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Recent work on cache attacks has shown that CPU caches represent a powerful source of information leakage. However, existing attacks require manual identification of vulnerabilities, i.e., data accesses or instruction execution depending on secret information. In this paper, we present Cache Template Attacks. This generic attack technique allows us to profile and exploit cache-based information leakage of any program automatically, without prior knowledge of specific software versions or even specific system information. Cache Template Attacks can be executed online on a remote system without any prior offline computations or measurements.</p>
<p>Cache Template Attacks consist of two phases. In the profiling phase, we determine dependencies between the processing of secret information, e.g., specific key inputs or private keys of cryptographic primitives, and specific cache accesses. In the exploitation phase, we derive the secret values based on observed cache accesses. We illustrate the power of the presented approach in several attacks, but also in a useful application for developers. Among the presented attacks is the application of Cache Template Attacks to infer keystrokes and—even more severe—the identification of specific keys on Linux and Windows user interfaces. More specifically, for lowercase-only passwords, we can reduce the entropy per character from log2(26) = 4.7 to 1.4 bits on Linux systems.</p>
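<p>A quick sanity check on the entropy figures above (my own arithmetic, not the paper's code): dropping from log2(26) ≈ 4.7 to 1.4 bits per character shrinks an 8-character lowercase search space by a factor of nearly 10^8.</p>

```python
import math

full = math.log2(26)   # entropy per character of a random lowercase letter
leaked = 1.4           # bits/char remaining after the cache side channel (paper's figure)

assert abs(full - 4.7) < 0.01  # log2(26) ≈ 4.70, matching the abstract

# For an 8-character lowercase password, the attacker's search space shrinks
# from 2^(8*4.70) ≈ 2.1e11 candidates to roughly 2^(8*1.4) ≈ 2.4e3.
n = 8
before = 2 ** (n * full)   # equals 26**8
after = 2 ** (n * leaked)
print(f"search space: {before:.3g} -> {after:.3g} ({before / after:.3g}x smaller)")
```
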
<p>Furthermore, we perform an automated attack on the T-table-based AES implementation of OpenSSL that is as efficient as state-of-the-art manual cache attacks.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/13952016-04-14T10:17:54-04:002016-04-14T10:24:54-04:00https://talks.cs.umd.edu/talks/1395Improving Android's Reliability and Security<a>Iulian Neamtiu - NJIT</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, April 26, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Android is the dominant mobile platform worldwide. My group has developed a variety of analyses aimed at improving Android's reliability and security. First, we will show how "software repository mining" can reveal common classes of errors in mobile apps. Second, we describe two tools, the A3E Android app explorer, and VALERA, a record-and-replay approach that helps with a variety of tasks, e.g., reproducing executions, finding and fixing concurrency bugs, and app profiling. Third, we present a static analysis that has found a new class of Android app errors we named "resume/restart errors". Finally, we show how the aforementioned techniques can be combined to find and reduce the security risks posed by Android apps.</p>
<p> </p><br><b>Bio:</b> <p>Iulian Neamtiu is an Associate Professor in the Department of Computer Science at the New Jersey Institute of Technology. He received his Ph.D. from UMD CS in 2008, and from 2008 to 2015 he was an Assistant, then Associate Professor at the University of California, Riverside. His research areas span programming languages, security, software engineering, and smartphones, with an overarching goal of making software and smartphones more secure, efficient, dependable, as well as easy to maintain and modify. He is a recipient of the NSF CAREER award, the UCR Regents' Fellowship award, as well as two Google Research Awards. He is part of the 10-year Cyber-Security Collaborative Research Alliance (CRA), a joint effort between the Army Research Laboratory and five universities, whose goal is to advance the theoretical foundations of cyber science in the context of Army networks. His research has been funded by NSF, ARL, DARPA, Intel, and Google.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/14062016-04-19T14:18:13-04:002016-04-19T14:18:13-04:00https://talks.cs.umd.edu/talks/1406The Honey Badger of BFT ProtocolsAndrew Miller - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 22, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>The surprising success of cryptocurrencies has led to a surge of interest in deploying large-scale, highly robust, Byzantine fault tolerant (BFT) protocols for mission-critical applications, such as financial transactions. Although the conventional wisdom is to build atop a (weakly) synchronous protocol such as PBFT (or a variation thereof), such protocols rely critically on network timing assumptions, and only guarantee liveness when the network behaves as expected. We argue these protocols are ill-suited for this deployment scenario.</p>
<p>We present an alternative, HoneyBadgerBFT, the first practical asynchronous BFT protocol, which guarantees liveness without making any timing assumptions. We base our solution on a novel atomic broadcast protocol that achieves optimal asymptotic efficiency. We present an implementation and experimental results to show our system can achieve throughput of tens of thousands of transactions per second, and scales to over a hundred nodes on a wide area network. We even conduct BFT experiments over Tor, without needing to tune any parameters. Unlike the alternatives, HoneyBadgerBFT simply does not care about the underlying network.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/14092016-04-22T12:27:55-04:002016-04-22T12:40:14-04:00https://talks.cs.umd.edu/talks/1409The Ring of Gyges: Investigating the Future of Criminal Smart Contracts Ahmed Kosba - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 29, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;">Thanks to their anonymity (pseudonymity) and lack of trusted intermediaries, cryptocurrencies such as Bitcoin have created or stimulated growth in many businesses and communities, some of them regrettably criminal. Next-generation decentralized cryptocurrencies such as Ethereum will include rich scripting languages in support of </span><em style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;">smart contracts</em><span style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;">, general-purpose programs that autonomously intermediate transactions. We show how such smart contracts will enlarge the range of criminal activities that can exploit the pseudonymity and minimal trust assumptions of cryptocurrencies. 
We demonstrate the feasibility in the near future of </span><em style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;">criminal smart contracts</em><span style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;"> (CSCs) for leakage of confidential information, theft of cryptographic keys, and various real-world crimes (murder, arson, terrorism). Our results highlight the urgency of creating policy and technical safeguards against CSCs in order to realize the considerable promise of smart contracts for beneficial goals.</span></p>
<p><span style="font-family: arial, helvetica, sans-serif;"><span style="font-family: arial, helvetica, sans-serif; font-size: 12.8px;">Joint work with Ari Juels and Elaine Shi.</span></span></p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/14122016-04-25T14:35:09-04:002016-04-25T14:35:09-04:00https://talks.cs.umd.edu/talks/141210-round Feistel is indifferentiable from an ideal cipherAishwarya Thiruvengadam - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, April 26, 2016, 4:00-5:00 pm<br><br><b>Abstract:</b> <p>We revisit the question of constructing an ideal cipher from a random oracle. Coron et al. (Journal of Cryptology, 2014) proved that a 14-round Feistel network using random, independent, keyed round functions is indifferentiable from an ideal cipher, thus demonstrating the feasibility of such a construction. Left unresolved is the best possible efficiency of the transformation. We improve upon the result of Coron et al. and show that 10 rounds suffice.</p>
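<p>For context, the Feistel construction analyzed above builds an invertible permutation out of round functions that need not themselves be invertible: each round sets L' = R and R' = L xor F(k, R). A toy sketch (the 32-bit round function F and the key values below are arbitrary stand-ins, not the independent random round functions of the indifferentiability result):</p>

```python
def feistel_encrypt(left, right, round_keys, F):
    """Run a Feistel network: each round maps (L, R) to (R, L xor F(k, R))."""
    for k in round_keys:
        left, right = right, left ^ F(k, right)
    return left, right

def feistel_decrypt(left, right, round_keys, F):
    """Invert by undoing the rounds in reverse order, even though F is not invertible."""
    for k in reversed(round_keys):
        left, right = right ^ F(k, left), left
    return left, right

# Toy keyed round function on 32-bit halves (illustrative only).
F = lambda k, x: ((x * 2654435761) ^ k) & 0xFFFFFFFF

keys = [0xA5A5A5A5, 0x3C3C3C3C, 0x0F0F0F0F, 0x12345678, 0x9ABCDEF0,
        0x0BADF00D, 0xDEADBEEF, 0xCAFEBABE, 0x31415926, 0x27182818]  # 10 rounds
ct = feistel_encrypt(0x01234567, 0x89ABCDEF, keys, F)
assert feistel_decrypt(*ct, keys, F) == (0x01234567, 0x89ABCDEF)
```

<p>Invertibility holds for any number of rounds; the hard part, and the subject of the talk, is showing that 10 rounds with random round functions already behave like an ideal cipher.</p>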
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/14282016-05-03T21:00:45-04:002016-05-04T15:13:11-04:00https://talks.cs.umd.edu/talks/1428An Inconvenient Trust: User Attitudes Toward Security and Usability Tradeoffs for Key-Directory Encryption SystemsWei Bai - MC2<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, May 6, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Many critical communications now take place digitally, but recent revelations demonstrate that these communications can often be intercepted. To achieve true message privacy, users need end-to-end message encryption, in which the communications service provider is not able to decrypt the content. Historically, end-to-end encryption has proven extremely difficult for people to use correctly, but recently tools like Apple’s iMessage and Google’s End-to-End have made it more broadly accessible by using key-directory services. These tools (and others like them) sacrifice some security properties for convenience, which alarms some security experts, but little is known about how average users evaluate these tradeoffs. In a 52-person interview study, we asked participants to complete encryption tasks using both a traditional key-exchange model and a key-directory-based registration model. We also described the security properties of each (varying the order of presentation) and asked participants for their opinions. We found that participants understood the two models well and made coherent assessments about when different tradeoffs might be appropriate. 
Our participants recognized that the less-convenient exchange model was more secure overall, but found the security of the registration model to be “good enough” for many everyday purposes.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/15072016-09-06T09:03:33-04:002016-09-06T09:03:33-04:00https://talks.cs.umd.edu/talks/1507Topological attacks on Mobile Ad-hoc Networks (MANETs)Ariel Stulman - Jerusalem College of Technology<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, September 13, 2016, 4:00-5:00 pm<br><br><b>Abstract:</b> <p>Mobile Ad-hoc Networks (MANETs) are a major candidate for delivering next-generation self-organizing technologies, and are applicable in multiple environments (e.g., IoT, VANETs, disaster zones, etc.). The main focus of research, however, is geared towards routing efficiency, self-organization and other "management" issues; hence, the resulting native protocols tend to be vulnerable to various attacks. Over the years, work has been done to improve protocol security, with different solutions proposed for different types of attacks. These solutions, however, often compromise routing efficiency or require network overhead, and many are themselves a new attack vector.<br><br>In this talk, one major topologically based attack against the Optimized Link State Routing protocol (OLSR) and the similar OSPF-m will be described. We will show how the attack can manifest as a full DoS or as gray- or black-hole attacks. We then describe a solution using fictitious nodes for defending OLSR from these attacks, employing the same tactics used by the attack itself for defense.</p><br><b>Bio:</b> <p><span class="il">Ariel</span> Stulman received his bachelor's degree in Technology and Applied Sciences from the Jerusalem College of Technology, Jerusalem, Israel. 
He then earned an M.Sc. from Bar-Ilan University, Ramat-Gan, Israel, in 2002, and in 2005 received a Ph.D. from the University of Reims Champagne-Ardenne, Reims, France. Since 2006 he has held a position in the computer science department of the Jerusalem College of Technology.<br><br>His research interests are in the field of mobile protocol security, geared towards forthcoming next-generation technologies (e.g., IoT, VANETs, etc.). He has also done work on software testing, formal methods and real-time systems.<br><br>Dr. Stulman is a Senior Member of the ACM and a member of the IEEE, and is the founding director of the Cyber research group at JCT.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/15612016-09-27T10:17:20-04:002016-09-27T10:19:29-04:00https://talks.cs.umd.edu/talks/1561Defense in Depth: A Synthesis of Prospective and Retrospective Security<a href="http://www.cs.uvm.edu/~ceskalka/">Christian Skalka - University of Vermont</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Thursday, September 29, 2016, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div>Retrospective security has become increasingly important to the theory and practice of cyber security, with auditing a crucial component of it. However, in systems where auditing is used, programs are typically instrumented to generate audit logs using manual, ad-hoc techniques. We propose a foundational semantics for auditing, intended to support provable correctness of program rewriting algorithms that instrument formal logging specifications. 
Correctness guarantees that the execution of an instrumented program produces sound and complete audit logs, properties defined by an information containment relation between logs and the program's logging semantics.</div>
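<p>A drastically simplified, hypothetical picture of such instrumentation — rewriting a function so that every call matching a logging specification is recorded. The paper's contribution is the formal semantics that makes rewriting like this provably sound and complete; the decorator, spec, and function names below are invented for illustration:</p>

```python
import functools

audit_log = []

def logged(spec):
    """Rewrite a function so every call satisfying the logging spec is recorded."""
    def rewrite(fn):
        @functools.wraps(fn)
        def instrumented(*args, **kwargs):
            if spec(args, kwargs):                     # log only what the spec demands
                audit_log.append((fn.__name__, args))  # soundness: each entry reflects a real call
            return fn(*args, **kwargs)                 # completeness: no matching call escapes
        return instrumented
    return rewrite

# Hypothetical spec: record only emergency-mode accesses.
@logged(spec=lambda args, kwargs: args and args[0] == "emergency")
def read_record(mode, patient_id):
    return f"record {patient_id} ({mode})"

read_record("routine", 7)      # not logged
read_record("emergency", 7)    # logged
assert audit_log == [("read_record", ("emergency", 7))]
```
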
<div> </div>
<div>We study two applications of our theory that support a defense-in-depth approach to security, in particular the combination of retrospective audit logging with prospective access control mechanisms in a single uniform policy specification. As a first application, we consider break-the-glass policies, which are common in healthcare informatics when the need to access information in emergency situations overrides "normal" security concerns. As a second application, we consider an in-depth approach to a dynamic taint analysis defense against injection attacks, in the presence of partially trusted sanitization. A program rewriting implementation of these mechanisms for the OpenMRS medical records software system is work in progress.</div>
<p>Chris's research lies in the intersection of computer science theory and practice. His work focuses on the design of programming languages, especially type disciplines, to support security and safety in programs.</p>
<p>Recently Chris's research has focused on information systems that combine embedded and mobile devices with machine learning data analysis, as well as diverse applications including snow hydrology and psychological sciences.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16362016-12-27T14:04:53-05:002017-02-22T09:09:09-05:00https://talks.cs.umd.edu/talks/1636Helping Johnny to Analyze Malware: A Usability-Optimized Decompiler and Malware Analysis User Study<a href="http://cs.umd.edu/~micinski/">Kristopher Micinski - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Wednesday, February 22, 2017, 1:00-1:30 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Analysis of malicious software is an essential task in computer security; it provides the necessary understanding to devise effective countermeasures and mitigation strategies. The level of sophistication and complexity of current malware continues to evolve significantly, as the recently discovered “Regin” malware family strikingly illustrates. This complexity makes the already tedious and time-consuming task of manual malware reverse engineering even more difficult and challenging. Decompilation can accelerate this process by enabling analysts to reason about a high-level, more abstract form of binary code. While significant advances have been made, state-of-the-art decompilers still produce very complex and unreadable code and malware analysts still frequently go back to analyzing the assembly code. 
In this paper, we present several semantics-preserving code transformations to make the decompiled code more readable, thus helping malware analysts understand and combat malware. We have implemented our optimizations as extensions to the academic decompiler DREAM. To evaluate our approach, we conducted the first user study to measure the quality of decompilers for malware analysis. Our study includes 6 analysis tasks based on real malware samples we obtained from independent malware experts. We evaluate three decompilers: the leading industry decompiler Hex-Rays, the state-of-the-art academic decompiler DREAM, and our usability-optimized decompiler DREAM++. The results show that our readability improvements had a significant effect on how well our participants could analyze the malware samples. DREAM++ outperforms both Hex-Rays and DREAM significantly. Using DREAM++ participants solved 3× more tasks than when using Hex-Rays and 2× more tasks than when using DREAM.</span></p>
<p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Slides: <a href="http://www.ieee-security.org/TC/SP2016/slides/yakdan.pdf">http://www.ieee-security.org/TC/SP2016/slides/yakdan.pdf</a></span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16372016-12-30T11:37:59-05:002017-02-01T15:25:08-05:00https://talks.cs.umd.edu/talks/1637Acing the IOC Game: Toward Automatic Discovery and Analysis of Open-Source Cyber Threat Intelligence<a href="https://www.cs.umd.edu/people/osuciu">Octavian Suciu - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, February 1, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-size: 12.8px;">To adapt to the rapidly evolving landscape of cyber threats, security professionals are actively exchanging Indicators of Compromise (IOC) (e.g., malware signatures, botnet IPs) through public sources (e.g. blogs, forums, tweets, etc.). Such information, often presented in articles, posts, white papers etc., can be converted into a machine-readable OpenIOC format for automatic analysis and quick deployment to various security mechanisms like an intrusion detection system. With hundreds of thousands of sources in the wild, the IOC data are produced at a high volume and velocity today, which becomes increasingly hard for humans to manage. Efforts to automatically gather such information from unstructured text, however, are impeded by the limitations of today's Natural Language Processing (NLP) techniques, which cannot meet the high standard (in terms of accuracy and coverage) expected from the IOCs that could serve as direct input to a defense system. In this paper, we present iACE, an innovative solution for fully automated IOC extraction. 
Our approach is based upon the observation that the IOCs in technical articles are often described in a predictable way: being connected to a set of context terms (e.g., "download") through stable grammatical relations. Leveraging this observation, iACE is designed to automatically locate a putative IOC token (e.g., a zip file) and its context (e.g., "malware", "download") within the sentences in a technical article, and further analyze their relations through a novel application of graph mining techniques. Once the grammatical connection between the tokens is found to be in line with the way that the IOC is commonly presented, these tokens are extracted to generate an OpenIOC item that describes not only the indicator (e.g., a malicious zip file) but also its context (e.g., download from an external source). Running on 71,000 articles collected from 45 leading technical blogs, this new approach demonstrates remarkable performance: it generated 900K OpenIOC items with a precision of 95% and a coverage over 90%, far beyond what state-of-the-art NLP techniques and industry IOC tools can achieve, at a speed of thousands of articles per hour. Further, by correlating the IOCs mined from the articles published over a 13-year span, our study sheds new light on the links across hundreds of seemingly unrelated attack instances, particularly their shared infrastructure resources, as well as the impacts of such open-source threat intelligence on security protection and evolution of attack strategies.</span></p>
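<p>A drastically simplified sketch of the extraction idea — pairing IOC-shaped tokens with nearby context terms — using plain regular expressions in place of iACE's dependency-graph mining. The pattern names and context list are my own illustrative choices, not iACE's:</p>

```python
import re

CONTEXT_TERMS = {"download", "dropped", "connects", "c2", "payload"}
TOKEN_PATTERNS = {
    "ip":  re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b"),
    "md5": re.compile(r"\b[a-f0-9]{32}\b"),
}

def extract_iocs(sentence):
    """Emit (kind, token, context) triples when an IOC-shaped token shares a
    sentence with a known context term -- a crude stand-in for iACE's check
    that the two are linked by a stable grammatical relation."""
    words = set(re.findall(r"[a-z0-9]+", sentence.lower()))
    context = words & CONTEXT_TERMS
    if not context:
        return []
    return [(kind, m, sorted(context))
            for kind, pat in TOKEN_PATTERNS.items()
            for m in pat.findall(sentence)]

iocs = extract_iocs("The payload connects to 203.0.113.7 for its second stage.")
assert iocs == [("ip", "203.0.113.7", ["connects", "payload"])]
```

<p>The real system replaces this co-occurrence test with relation analysis over the sentence's dependency graph, which is what keeps its precision at the level a defense system can consume directly.</p>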
<p><span style="color: #222222; font-size: 12.8px;">Slides: <a href="https://www.umiacs.umd.edu/~dvotipka/misc/osuciu_020117.pdf">https://www.umiacs.umd.edu/~dvotipka/misc/osuciu_020117.pdf</a></span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16382016-12-30T11:40:08-05:002017-04-06T13:34:05-04:00https://talks.cs.umd.edu/talks/1638Where is the Digital Divide? A Survey of Security, Privacy, and Socioeconomics<a href="https://cs.umd.edu/~eredmiles/">Elissa Redmiles - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, April 26, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>The behavior of the least-secure user can influence security and privacy outcomes for everyone else. Thus, it is important to understand the factors that influence the security and privacy of a broad variety of people. Prior work has suggested that users with differing socioeconomic status (SES) may behave differently; however, no research has examined how SES, advice sources, and resources relate to the security and privacy incidents users report. To address this question, we analyze a 3,000-respondent, census-representative telephone survey. We find that, contrary to prior assumptions, people with lower educational attainment report as many or fewer incidents than more educated people, and that users' experiences are significantly correlated with their advice sources, regardless of SES or resources.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16392016-12-30T11:42:02-05:002017-02-08T12:09:19-05:00https://talks.cs.umd.edu/talks/1639You’ve Got Vulnerability: Exploring Effective Vulnerability Notifications<a href="https://www.umiacs.umd.edu/~dvotipka/">Daniel Votipka - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Wednesday, February 8, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>Security researchers can send vulnerability notifications to take proactive measures in securing systems at scale. However, the factors affecting a notification’s efficacy have not been deeply explored. In this paper, we report on an extensive study of notifying thousands of parties of security issues present within their networks, with an aim of illuminating which fundamental aspects of notifications have the greatest impact on efficacy. The vulnerabilities used to drive our study span a range of protocols and considerations: exposure of industrial control systems; apparent firewall omissions for IPv6-based services; and exploitation of local systems in DDoS amplification attacks. We monitored vulnerable systems for several weeks to determine their rate of remediation. By comparing with experimental controls, we analyze the impact of a number of variables: choice of party to contact (WHOIS abuse contacts versus national CERTs versus US-CERT), message verbosity, hosting an information website linked to in the message, and translating the message into the notified party’s local language. We also assess the outcome of the emailing process itself (bounces, automated replies, human replies, silence) and characterize the sentiments and perspectives expressed in both the human replies and an optional anonymous survey that accompanied our notifications. We find that various notification regimens do result in different outcomes. The best observed process was directly notifying WHOIS contacts with detailed information in the message itself. These notifications had a statistically significant impact on improving remediation, and human replies were largely positive. However, the majority of notified contacts did not take action, and even when they did, remediation was often only partial. Repeat notifications did not further patching. 
These results are promising but ultimately modest, behooving the security community to more deeply investigate ways to improve the effectiveness of vulnerability notifications.</p>
<p><span style="color: #222222; font-family: Georgia, Cambria, 'Times New Roman', Times, serif; font-size: 12.8px;">Slides: https://www.umiacs.umd.edu/~dvotipka/misc/YouveGotVuln.pdf</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16442017-01-10T15:55:13-05:002017-03-15T14:51:25-04:00https://talks.cs.umd.edu/talks/1644When SIGNAL hits the Fan: On the Usability and Security of State-of-the-Art Secure Mobile Messaging<a href="http://www.ece.umd.edu/~wbai/">Wei Bai - UMD ECE</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, March 15, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">In this paper we analyze the security and usability of the state-of-the-art secure mobile messenger SIGNAL. In the first part of this paper we discuss the threat model current secure mobile messengers face. In the following, we conduct a user study to examine the usability of SIGNAL’s security features. Specifically, our study assesses whether users are able to detect and deter man-in-the-middle attacks on the SIGNAL protocol. Our results show that the majority of users failed to correctly compare keys with their conversation partner for verification purposes due to usability problems and incomplete mental models. Hence users are very likely to fall for attacks on the essential infrastructure of today’s secure messaging apps: the central services to exchange cryptographic keys. We expect that our findings foster research into the unique usability and security challenges of state-of-the-art secure mobile messengers and thus ultimately result in strong protection measures for the average user.</span></p>
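The key-comparison step that participants struggled with can be illustrated with a toy sketch. This is not the actual SIGNAL safety-number algorithm (which iterates a hash over identity keys and renders 60 digits); the key values and digit rendering below are made up for illustration. The point is only that both parties must derive the same short string and compare it out of band:

```python
import hashlib

def toy_fingerprint(key_a: bytes, key_b: bytes) -> str:
    """Derive a short, human-comparable fingerprint from two public keys.

    Toy sketch only: sorting the keys ensures both parties compute the
    identical string regardless of who is "first".
    """
    digest = hashlib.sha256(b"".join(sorted([key_a, key_b]))).digest()
    # Render the first 20 bytes as four groups of five digits
    nums = [str(b % 10) for b in digest[:20]]
    groups = ["".join(nums[i:i + 5]) for i in range(0, 20, 5)]
    return " ".join(groups)

alice_view = toy_fingerprint(b"alice-public-key", b"bob-public-key")
bob_view = toy_fingerprint(b"bob-public-key", b"alice-public-key")
assert alice_view == bob_view  # both users see the same string to compare
```

A man-in-the-middle who substitutes keys changes the derived string, so the attack is detectable only if users actually perform this comparison, which is exactly the step the study found most users skipped or got wrong.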
<p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Slides: https://www.umiacs.umd.edu/~dvotipka/misc/ReadingGroupSlides_Wei.pdf</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16472017-01-17T08:30:00-05:002017-01-17T08:30:00-05:00https://talks.cs.umd.edu/talks/1647Information Flow Security in Practical Systems<a href="http://www.andrew.cmu.edu/user/liminjia/">Limin Jia - Carnegie Mellon University</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, March 3, 2017, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Users routinely type sensitive data such as passwords, credit card numbers, and even SSNs into their mobile phone apps and browsers. Rich functionality combined with weak security mechanisms makes protecting users’ data challenging. In this talk, I will present a few case studies of applying information flow security to protecting users’ data in Android, the Chromium browser, and the IFTTT framework. For these systems, we show that dynamic coarse-grained taint tracking, even though it allows implicit flows, can be retrofitted into existing systems to defend users’ data from common attacks. I will explain the challenges in striking a balance between preserving key functionality of legacy systems and ensuring formally provable security guarantees, and discuss how different modeling techniques affect noninterference proofs. </p><br><b>Bio:</b> <p>Dr. Jia is an Assistant Research Professor in the ECE Department at Carnegie Mellon University. Dr. Jia received her PhD in Computer Science from Princeton University. She received her BE in Computer Science and Engineering from the University of Science and Technology of China. Dr. 
Jia's research interests are in formal aspects of software security, in particular, applying formal approaches to constructing software systems with known security guarantees.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16772017-02-01T15:27:35-05:002017-03-29T17:56:18-04:00https://talks.cs.umd.edu/talks/1677Measuring PUP Prevalence and PUP Distribution through Pay-Per-Install ServicesZiyun Zhu - UMD ECE<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, March 29, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; margin: 0px 0px 0.833em; padding: 0px; border: 0px; font-family: Arial, Tahoma, Verdana, sans-serif; font-size: 14px; line-height: 20px; vertical-align: baseline;">Potentially unwanted programs (PUP) such as adware and rogueware, while not outright malicious, exhibit intrusive behavior that generates user complaints and makes security vendors flag them as undesirable. PUP has been little studied in the research literature despite recent indications that its prevalence may have surpassed that of malware.</div>
<div style="color: #222222; margin: 0px 0px 0.833em; padding: 0px; border: 0px; font-family: Arial, Tahoma, Verdana, sans-serif; font-size: 14px; line-height: 20px; vertical-align: baseline;">In this work we perform the first systematic study of PUP prevalence and its distribution through pay-per-install (PPI) services, which link advertisers that want to promote their programs with affiliate publishers willing to bundle their programs with offers for other software. Using AV telemetry information comprising 8 billion events on 3.9 million real hosts during a 19-month period, we discover that over half (54%) of the examined hosts have PUP installed. PUP publishers are highly popular, e.g., the top two PUP publishers rank 15 and 24 amongst all software publishers (benign and PUP). Furthermore, we analyze the who-installs-who relationships, finding that 65% of PUP downloads are performed by other PUP and that 24 PPI services distribute over a quarter of all PUP. We also examine the top advertiser programs distributed by the PPI services, observing that they are dominated by adware running in the browser (e.g., toolbars, extensions) and rogueware. Finally, we investigate the PUP-malware relationships in the form of malware installations by PUP and PUP installations by malware. We conclude that while such events exist, PUP distribution is largely disjoint from malware distribution.</div>
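The who-installs-who analysis described above reduces to counting labelled installer-to-installee edges. A minimal sketch with made-up events (the real study used AV telemetry on millions of hosts, not this toy data):

```python
# Toy who-installs-who analysis: each telemetry event records the class
# (PUP or benign) of the installer and of the installed program.
# The event list below is illustrative, not real telemetry.
events = [
    ("pup", "pup"),     # PUP installing other PUP
    ("pup", "pup"),
    ("benign", "pup"),  # benign installer delivering PUP
    ("pup", "benign"),
    ("benign", "benign"),
]

# Fraction of PUP downloads performed by other PUP
pup_downloads = [e for e in events if e[1] == "pup"]
by_pup = sum(1 for installer, _ in pup_downloads if installer == "pup")
share = by_pup / len(pup_downloads)
print(f"{share:.0%} of PUP downloads were performed by other PUP")  # 67%
```

On real telemetry the same edge count, aggregated per publisher, also yields the distribution statistics the abstract cites (e.g., which PPI services account for what share of installs).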
<div style="color: #222222; margin: 0px 0px 0.833em; padding: 0px; border: 0px; font-family: Arial, Tahoma, Verdana, sans-serif; font-size: 14px; line-height: 20px; vertical-align: baseline;">Slides: https://www.umiacs.umd.edu/~dvotipka/misc/ziyun_slides.pdf</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/16972017-02-13T12:23:22-05:002017-02-15T15:08:06-05:00https://talks.cs.umd.edu/talks/1697User Interaction and Permission use on Android<a href="https://www.umiacs.umd.edu/~dvotipka/">Daniel Votipka - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, February 15, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p class="p1">Android and other mobile operating systems ask users for authorization before allowing apps to access sensitive resources such as contacts and location. We hypothesize that such authorization systems could be improved by becoming more integrated with the app’s user interface. In this paper, we conduct two studies to test our hypothesis. First, we use AppTracer, a dynamic analysis tool we developed, to measure to what extent user interactions and sensitive resource use are related in existing apps. Second, we conduct an online survey to examine how different interactions with the UI affect users’ expectations about whether an app accesses sensitive resources. The results of our studies suggest that user interactions such as button clicks can be interpreted as authorization, reducing the need for separate requests; but that accesses not directly tied to user interactions should be separately authorized, possibly when apps are first launched.</p>
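The relationship between user interactions and resource accesses can be sketched with a simple timing heuristic. This is an assumed illustration, not AppTracer's actual analysis: an access is treated as "interactive" if it occurs within a short window after some UI event, and as "background" otherwise.

```python
# Toy sketch (assumed heuristic, for illustration only) of relating UI
# interactions to sensitive-resource use: label each access by whether
# it closely follows a UI event such as a button click.
WINDOW = 2.0  # seconds; illustrative threshold

def classify(accesses, ui_events, window=WINDOW):
    """Label each (resource, timestamp) access by proximity to a UI event."""
    labels = {}
    for resource, t in accesses:
        interactive = any(0 <= t - u <= window for u in ui_events)
        labels[resource] = "interactive" if interactive else "background"
    return labels

ui_clicks = [1.0, 10.0]  # timestamps of button clicks
access_log = [("location", 1.5), ("contacts", 6.0)]
print(classify(access_log, ui_clicks))
# {'location': 'interactive', 'contacts': 'background'}
```

Under the paper's hypothesis, the "interactive" accesses could be implicitly authorized by the click itself, while "background" accesses would warrant a separate authorization prompt.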
<p class="p1"> </p>
<p class="p1">Slides: umiacs.umd.edu/~dvotipka/misc/<span style="font-variant-ligatures: no-common-ligatures;">AppTracerSRG.pdf</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17102017-02-21T10:52:56-05:002017-02-21T10:52:56-05:00https://talks.cs.umd.edu/talks/1710HOP: Hardware makes Obfuscation Practical <a href="https://www.cs.umd.edu/~kartik/">Kartik Nayak - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Wednesday, February 22, 2017, 1:30-2:00 pm<br><br><b>Abstract:</b> <div class="m_2513655749940775168gmail_msg" style="font-family: arial, sans-serif; font-size: 12.8px; color: #212121;">Program obfuscation is a central primitive in cryptography and has important real-world applications in protecting software from IP theft. However, well-known results from the cryptographic literature have shown that software only virtual black box (VBB) obfuscation of general programs is impossible. In this paper we propose HOP, a system (with matching theoretic analysis) that achieves simulation-secure obfuscation for RAM programs, using secure hardware to circumvent previous impossibility results. To the best of our knowledge, HOP is the first implementation of a provably secure VBB obfuscation scheme in any model under any assumptions. </div>
<div class="m_2513655749940775168gmail_msg" style="font-family: arial, sans-serif; font-size: 12.8px; color: #212121;"> </div>
<div class="m_2513655749940775168gmail_msg" style="font-family: arial, sans-serif; font-size: 12.8px; color: #212121;">HOP trusts only a hardware single-chip processor. We present a theoretical model for our hardware design and prove its security in the UC framework. Our goal is both provable security and practicality. To this end, our theoretic analysis accounts for all optimizations used in our practical design, including the use of a hardware Oblivious RAM (ORAM), hardware scratchpad memories, instruction scheduling techniques and context switching. We then detail a prototype hardware implementation of HOP. The design requires 72% of the area of a V7485t Field Programmable Gate Array (FPGA) chip. Evaluated on a variety of benchmarks, HOP achieves an overhead of 8× ∼ 76× relative to an insecure system. Compared to all prior (not implemented) work that strives to achieve obfuscation, HOP improves performance by more than three orders of magnitude. We view this as an important step towards deploying obfuscation technology in practice.</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17212017-03-01T09:05:36-05:002017-03-01T09:05:36-05:00https://talks.cs.umd.edu/talks/1721Privacy-Preserving Search of Similar Patients in Genomic DataShai Halevi - IBM<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Thursday, March 9, 2017, 11:00-11:59 am<br><br><b>Abstract:</b> <p>The growing availability of genomic data holds great promise for advancing medicine and research, but unlocking its full potential requires adequate methods for protecting the privacy of individuals whose genome data we use. 
One example of this tension is running Similar Patient Query on remote genomic data: In this setting a doctor who holds the genome of his/her patient may try to find other individuals with "close" genomic data (in edit distance), and use the data of these individuals to help diagnose and find effective treatment for that patient's conditions. This is clearly a desirable mode of operation; however, the privacy exposure implications are considerable, so we would like to carry out the above "closeness" computation in a privacy-preserving manner.<br> <br> Secure-computation techniques offer a way out of this dilemma, but the high cost of computing edit distance privately poses a great challenge. Wang et al. [ACM-CCS'15] recently proposed an efficient solution for situations where the genome sequences are so close that the edit distance between two genomes can be well approximated just by looking at the indexes in which they differ from the reference genome. However, this solution does not extend well to cases with high divergence among individual genomes, and different techniques are needed there.<br> <br> In this work we put forward a new approach for highly efficient edit-distance approximation that works well even in settings with much higher divergence. We present contributions both in the design of the approximation method itself and in the protocol for computing it privately. Our tests indicate that our approximation method works well even in regions of the genome where the distance between individuals is 5% or more with many insertions and deletions (compared to 99.5% similarity with mostly substitutions, as considered by Wang et al.). As for speed, our protocol implementation takes just a few seconds to run on databases with thousands of records, each of length thousands of alleles, and it scales almost linearly with both the database size and the length of the sequences in it. 
As an example, in the datasets of the recent iDASH competition, it takes less than two seconds to find the five records nearest to the query in a dataset of 500 sequences, each of length 3,500. This is 2-3 orders of magnitude faster than using straightforward approaches.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17312017-03-08T08:05:02-05:002017-04-06T11:43:04-04:00https://talks.cs.umd.edu/talks/1731pASSWORD tYPOS and How to Correct Them Securely<a href="https://www.cs.umd.edu/people/aish">Aishwarya Thiruvengadam - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, May 3, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: sans-serif; font-size: 15px;">We provide the first treatment of typo-tolerant password authentication for arbitrary user-selected passwords. Such a system, rather than simply rejecting a login attempt with an incorrect password, tries to correct common typographical errors on behalf of the user. Limited forms of typo-tolerance have been used in some industry settings, but to date there has been no analysis of the utility and security of such schemes. We quantify the kinds and rates of typos made by users via studies conducted on Amazon Mechanical Turk and via instrumentation of the production login infrastructure at Dropbox. The instrumentation at Dropbox did not record user passwords or otherwise change authentication policy, but recorded only the frequency of observed typos. Our experiments reveal that almost 10% of login attempts fail due to a handful of simple, easily correctable typos, such as capitalization errors. 
We show that correcting just a few of these typos would reduce login delays for a significant fraction of users as well as enable an additional 3% of users to achieve successful login. We introduce a framework for reasoning about typo-tolerance, and investigate the seemingly inherent tension here between security and usability of passwords. We use our framework to show that there exist typo-tolerant authentication schemes that can get corrections for "free": we prove they are as secure as schemes that always reject mistyped passwords. Building off this theory, we detail a variety of practical strategies for securely implementing typo-tolerance.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17322017-03-08T08:06:26-05:002017-03-29T18:02:34-04:00https://talks.cs.umd.edu/talks/1732Cryptographically Protected Database Search<a href="http://benjamin-fuller.uconn.edu/">Benjamin Fuller - University of Connecticut</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, April 19, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Abstract: Protected database search systems cryptographically isolate the roles of reading from, writing to, and administering the database. This separation limits unnecessary administrator access and protects data in the case of system breaches. Since protected search was introduced in 2000, the area has grown rapidly; systems are offered by academia, start-ups, and established companies. </div>
<p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">However, there is no best protected search system or set of techniques. Design of such systems is a balancing act between security, functionality, performance, and usability. This challenge is made more difficult by ongoing database specialization, as some users will want the functionality of SQL, NoSQL, or NewSQL databases. This database evolution will continue, and the protected search community should be able to quickly provide functionality consistent with newly invented databases. </span><br style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">At the same time, the community must accurately and clearly characterize the tradeoffs between different approaches. In this talk, we survey the range of tradeoffs between security and privacy. In particular, we </span><br style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">1) identify the important primitive operations across database paradigms,</span><br style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">2) evaluate the current state of protected search systems in implementing these base operations, and </span><br style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">3) analyze attacks against protected search for different base queries. </span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Benjamin Fuller is an Assistant Professor of Computer Science and Engineering at the University of Connecticut. His research focuses on driving cryptography into use in practice. 
His primary interests are authentication and searchable encryption. He has worked on a variety of problems from testing broadcast encryption while flying to scanning his iris for cryptographic key derivation. Prior to joining UConn, Ben was a research scientist at MIT Lincoln Laboratory from 2007-2016 working on searchable encryption. He received his PhD and MA from Boston University in 2015 and 2011 respectively.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17332017-03-08T08:08:02-05:002017-04-10T10:30:47-04:00https://talks.cs.umd.edu/talks/1733Driller: Augmenting Fuzzing Through Selective Symbolic Execution<a href="https://www.cs.umd.edu/people/willem">Willem Wyndham - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, April 12, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Memory corruption vulnerabilities are an everpresent risk in software, which attackers can exploit to obtain unauthorized access to confidential information. As products with access to sensitive data are becoming more prevalent, the number of potentially exploitable systems is also increasing, resulting in a greater need for automated software vetting tools. DARPA recently funded a competition, with millions of dollars in prize money, to further research focusing on automated vulnerability finding and patching, showing the importance of research in this area. Current techniques for finding potential bugs include static, dynamic, and concolic analysis systems, which each having their own advantages and disadvantages. A common limitation of systems designed to create inputs which trigger vulnerabilities is that they only find shallow bugs and struggle to exercise deeper paths in executables.</div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"> </div>
<div style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">We present Driller, a hybrid vulnerability excavation tool which leverages fuzzing and selective concolic execution in a complementary manner, to find deeper bugs. Inexpensive fuzzing is used to exercise compartments of an application, while concolic execution is used to generate inputs which satisfy the complex checks separating the compartments. By combining the strengths of the two techniques, we mitigate their weaknesses, avoiding the path explosion inherent in concolic analysis and the incompleteness of fuzzing. Driller uses selective concolic execution to explore only the paths deemed interesting by the fuzzer and to generate inputs for conditions that the fuzzer cannot satisfy. We evaluate Driller on 126 applications released in the qualifying event of the DARPA Cyber Grand Challenge and show its efficacy by identifying the same number of vulnerabilities, in the same time, as the top-scoring team of the qualifying event.</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17402017-03-21T08:29:11-04:002017-03-21T08:29:11-04:00https://talks.cs.umd.edu/talks/1740CacheBleed: A Timing Attack on OpenSSL Constant Time RSAYuval Yarom<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, March 28, 2017, 3:00-4:00 pm<br><br><b>Abstract:</b> <p>In recent years, microarchitectural attacks have become a significant threat to cryptographic software and hardware. In particular, cache-based side channel attacks have had devastating effects on the underlying cryptographic primitive, often resulting in complete key compromises. In response, implementations have adopted a "constant-time" programming approach to mitigate the attacks. <br> <br> Constant-time is a name for a collection of techniques that ensure that the execution of a cryptographic algorithm does not leak secret information via timing, execution path or memory access. In a nutshell, it requires that the program uses operations whose timing is constant, and does not use secret-dependent memory accesses or branches.<br> <br> To reduce the performance overhead of constant-time programming, developers have explored some relaxations of the model. In this talk I will cover some cache-based side-channel attacks and demonstrate how relaxing constant-time programming often renders implementations vulnerable.<br> <br> The talk is self-contained and assumes no specialist knowledge of either processor microarchitecture or cryptography. 
This is a joint work with Daniel Genkin and Nadia Heninger.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17462017-03-28T14:27:29-04:002017-03-28T14:27:29-04:00https://talks.cs.umd.edu/talks/1746LightDP: towards automating differential privacy proofs<a href="http://www.cs.umd.edu/~mwh/">Mike Hicks - UMD CS</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, March 29, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p style="font-size: 12.8px;">The growing popularity and adoption of differential privacy in academic and industrial settings has resulted in the development of increasingly sophisticated algorithms for releasing information while preserving privacy. Accompanying this phenomenon is the natural rise in the development and publication of incorrect algorithms, thus demonstrating the necessity of formal verification tools. However, existing formal methods for differential privacy face a dilemma: methods based on customized logics can verify sophisticated algorithms but come with a steep learning curve and significant annotation burden on the programmers, while existing programming platforms lack expressive power for some sophisticated algorithms.</p>
<p style="font-size: 12.8px;">In this paper, we present LightDP, a simple imperative language that strikes a better balance between expressive power and usability. The core of LightDP is a novel relational type system that separates relational reasoning from privacy budget calculations. With dependent types, the type system is powerful enough to verify sophisticated algorithms where the composition theorem falls short. In addition, the inference engine of LightDP infers most of the proof details, and even searches for the proof with minimal privacy cost when multiple proofs exist. We show that LightDP verifies sophisticated algorithms with little manual effort.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17682017-04-12T21:05:00-04:002017-04-12T21:05:00-04:00https://talks.cs.umd.edu/talks/1768eXpose: A Character-Level Convolutional Neural Network with Embeddings For Detecting Malicious URLs, File Paths and Registry KeysKonstantin Berlin - Sophos<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 14, 2017, 12:00-2:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: 'Lucida Grande',helvetica,arial,verdana,sans-serif; font-size: 14.4px; font-variant-ligatures: normal; background-color: #ffffff;">For years security machine learning research has promised to obviate the need for signature based detection by automatically learning to detect indicators of attack. Unfortunately, this vision hasn't come to fruition: in fact, developing and maintaining today's security machine learning systems can require engineering resources that are comparable to that of signature-based detection systems, due in part to the need to develop and continuously tune the "features" these machine learning systems look at as attacks evolve. 
Deep learning, a subfield of machine learning, promises to change this by operating on raw input signals and automating the process of feature design and extraction. In this paper we propose the eXpose neural network, which uses a deep learning approach we have developed to take generic, raw short character strings as input (a common case for security inputs, which include artifacts like potentially malicious URLs, file paths, named pipes, named mutexes, and registry keys), and learns to simultaneously extract features and classify using character-level embeddings and convolutional neural network. In addition to completely automating the feature design and extraction process, eXpose outperforms manual feature extraction based baselines on all of the intrusion detection problems we tested it on, yielding a 5%-10% detection rate gain at 0.1% false positive rate compared to these baselines.</span></p><br><b>Bio:</b> <p>Dr. Berlin is currently Director of Data Science Research and Senior Principal Investigator in the Data Science group at Sophos</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17792017-04-26T11:08:49-04:002017-04-26T11:08:49-04:00https://talks.cs.umd.edu/talks/1779To Catch a Ratter: Monitoring the Behavior of Amateur DarkComet RAT Operators in the Wild<a href="http://mason.gmu.edu/~mrezaeir/Aboutme.htm">Mohammad Rezaeirad - George Mason University</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, May 10, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <div>Remote Access Trojans (RATs) give remote attackers interactive control over a compromised machine. Unlike large- scale malware such as botnets, a RAT is controlled individually by a human operator interacting with the compromised machine remotely. 
The versatility of RATs makes them attractive to actors of all levels of sophistication: they’ve been used for espionage, information theft, voyeurism and extortion. Despite their increasing use, there are still major gaps in our understanding of RATs and their operators, including motives, intentions, procedures, and weak points where defenses might be most effective.</div>
<div>
<p style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">In this work we study the use of DarkComet, a popular commercial RAT. We collected 19,109 samples of DarkComet malware found in the wild, and in the course of two several-week-long experiments, ran as many samples as possible in our honeypot environment. By monitoring a sample’s behavior in our system, we are able to reconstruct the sequence of operator actions, giving us a unique view into operator behavior. We report on the results of 2,747 interactive sessions captured in the course of the experiment. During these sessions operators frequently attempted to interact with victims via remote desktop, to capture video, audio, and keystrokes, and to exfiltrate files and credentials. To our knowledge, ours is the first large-scale systematic study of RAT use. </p>
<p style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"><span style="font-size: 12.8px;">link: </span><a style="color: #1155cc; font-size: 12.8px;" href="https://people.eecs.berkeley.edu/~pearce/papers/rats_oakland_2017.pdf">https://people.eecs.berkeley.edu/~pearce/papers/rats_oakland_2017.pdf</a></p>
</div><br><b>Bio:</b> <p><span style="color: #222222; font-family: 'Times New Roman'; font-size: 16px;">Mohammad Rezaeirad is a Ph.D. student with interests in Cyber-Physical System security, measurement studies, and cryptography. Mohammad works under the supervision of Dr. Damon McCoy. Prior to joining George Mason, he obtained his master’s degree in Computer Science from the University of Louisiana and a BS in Security Technologies from Multimedia University.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/17902017-05-16T18:56:36-04:002017-05-16T18:56:36-04:00https://talks.cs.umd.edu/talks/1790Building Provably Secure Computer Systems against Timing Channels<a href="http://www.cse.psu.edu/~dbz5017/">Danfeng Zhang - Penn State University</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, May 26, 2017, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Timing channels have long been a difficult and important problem for computer security. The difficulty has been recognized since the 70's, but their importance has been reinforced by recent work that shows timing information can quickly leak sensitive information, such as private keys of RSA and AES. Such threats greatly harm the security of many emerging applications, including cloud computing.</p>
<p>In this talk, I will introduce novel programming languages for full-system control of timing channels. First, I will introduce a lightweight software-hardware contract that enables precise reasoning about timing channels in programming languages. Second, I will show that with such a contract, a novel type system is sufficient to provably control all timing leakage, assuming the hardware obeys the contract. Third, I will introduce a new hardware description language, SecVerilog, which enables formal verification of an efficient MIPS processor that obeys the contract. Evaluation on real-world security-sensitive applications suggests that the proposed approach has reasonable performance.</p><br><b>Bio:</b> <p>Danfeng Zhang is an Assistant Professor in Computer Science and Engineering at Penn State University. He received his BS and MS degrees from Peking University, and his PhD degree from Cornell University.</p>
<p>Dr. Zhang's research interests include computer security and programming languages. His research focuses on designing programming models with rigorous security guarantees and minimal burden on programmers. His recent projects include sound and practical methods for full-system timing channel mitigation, language-based differential privacy proofs, as well as general methods of error localization.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18392017-08-01T10:02:59-04:002017-08-01T10:02:59-04:00https://talks.cs.umd.edu/talks/1839A Socio-Technical Approach to Global Cybersecurity<a href="http://www.ghitamezzour.com">Ghita Mezzour - International University of Rabat in Morocco</a><br>2116 Hornbake South<br>Wednesday, August 2, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>Studying international aspects of cyber security requires taking into account both technical and social dimensions. However, the majority of cyber security research has focused only on the technical dimension. In my work, I study international cyber security using a socio-technical approach that combines data science techniques, computational models, and network science techniques.</p>
<p>I will start by presenting my work on empirically identifying factors behind international variation in cyber attack exposure and hosting. I use data from 10 million computers worldwide provided by a key anti-virus vendor. The results of this work indicate that reducing attack exposure and hosting in the most affected countries requires addressing both social and technical issues, such as corruption and computer piracy. Then, I will present a computational methodology to assess countries' cyber warfare capabilities. The methodology captures political factors that motivate countries to develop these capabilities and technical factors that enable such development. Together, these projects show that bridging the social and technical dimensions of cyber security can improve our understanding of the dynamics of international cyber security and have a real-world impact.</p><br><b>Bio:</b> <p>Ghita Mezzour is a visiting professor at the University of Maryland Institute for Advanced Computer Studies (UMIACS). She is also an assistant professor at the International University of Rabat in Morocco. Her research combines cyber security, data science, and social networks. Ghita received a PhD in Social Computing from Carnegie Mellon University in 2015. She received a Master and a Bachelor in Communication Systems from Ecole Polytechnique Federale de Lausanne in 2008 and 2006, respectively.</p>
<p>Ghita was selected as a Rising Star by MIT's Electrical Engineering and Computer Science Department in November 2015. She served as a program chair of the 16th International Conference on Hybrid Intelligent Systems and is currently an associate editor of Engineering Applications of Artificial Intelligence (Elsevier).</p>
<p>Contact info: <a href="mailto:ghita.mezzour@uir.ac.ma">ghita.mezzour@uir.ac.ma</a></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18452017-08-25T18:46:15-04:002017-08-25T18:46:15-04:00https://talks.cs.umd.edu/talks/1845Securing Databases from Probabilistic InferenceMarco Guarnieri - ETH Zurich<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V.
Williams Building (AVW)</a><br>Thursday, August 31, 2017, 10:00-11:00 am<br><br><b>Abstract:</b> <p>Databases can leak confidential information when users combine query results with probabilistic data dependencies and prior knowledge. Current research efforts offer mechanisms that either handle a limited class of dependencies or lack tractable enforcement algorithms necessary for scaling.</p>
<p>We propose a foundation for Database Inference Control based on PROBLOG, a probabilistic logic programming language. We leverage this foundation to develop ANGERONA, a provably secure enforcement mechanism that prevents information leakage in the presence of probabilistic dependencies. We then provide a tractable inference algorithm for a practically relevant fragment of PROBLOG. We empirically evaluate ANGERONA's performance, showing that it scales to relevant problems of interest.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18462017-08-28T13:57:29-04:002017-08-28T14:49:58-04:00https://talks.cs.umd.edu/talks/1846Cryptographic Perspectives on the Future of Privacy<a href="http://www.cs.umd.edu/~jkatz">Jonathan Katz - Department of Computer Science, University of Maryland</a><br><a href="https://tltc.umd.edu/esj">2204 Edward St. John Learning & Teaching Center (ESJ)</a><br>Wednesday, September 6, 2017, 4:00-5:00 pm<br><br><b>Abstract:</b> <p>This is Dr. Katz's Distinguished Scholar-Teacher talk. It targets a general audience while aiming to also be interesting to experts.</p>
<p>The Distinguished Scholar-Teacher Program, established in 1978, honors a small number of faculty members each year who have demonstrated notable success in both scholarship and teaching. By honoring the Distinguished Scholar-Teachers with this prestigious award, we reaffirm our commitment to excellence in teaching and scholarship. The Distinguished Scholar-Teacher Program is sponsored by the Office of Academic Affairs and administered by the Associate Provost for Faculty Affairs.</p>
<br><b>Bio:</b> <p>Jonathan Katz is a Professor in the Department of Computer Science at the University of Maryland. He is the Director of the Maryland Cybersecurity Center, the nexus for cybersecurity research and education on campus. He has co-authored a popular cryptography textbook, Introduction to Modern Cryptography. His research focuses on cryptography, security, privacy, and theoretical computer science.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18622017-09-14T11:40:32-04:002017-09-14T11:40:32-04:00https://talks.cs.umd.edu/talks/1862Towards Evaluating the Robustness of Neural NetworksYigitcan Kaya - UMD CS<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, September 15, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>Neural networks provide state-of-the-art results for most machine learning tasks. Unfortunately, neural networks are vulnerable to adversarial examples: given an input x and any target classification t, it is possible to find a new input x' that is similar to x but classified as t. This makes it difficult to apply neural networks in security-critical areas. Defensive distillation is a recently proposed approach that can take an arbitrary neural network and increase its robustness, reducing the success rate of current attacks from 95% to 0.5%. In this paper, we demonstrate that defensive distillation does not significantly increase the robustness of neural networks by introducing three new attack algorithms that are successful on both distilled and undistilled neural networks with 100% probability. 
Our attacks are tailored to three distance metrics used previously in the literature, and when compared to previous adversarial example generation algorithms, our attacks are often much more effective (and never worse). Furthermore, we propose using high-confidence adversarial examples in a simple transferability test that, as we show, can also be used to break defensive distillation. We hope our attacks will be used as a benchmark in future defense attempts to create neural networks that resist adversarial examples.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18652017-09-17T18:27:04-04:002017-09-17T18:27:04-04:00https://talks.cs.umd.edu/talks/1865Accessing Data while Preserving PrivacyGeorgios Kellaris<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, September 22, 2017, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p class="p1">We initiate a formal study of the privacy-efficiency tradeoff of secure database systems. Systems such as CryptDB and Cipherbase try to mitigate the high costs of full-fledged cryptographic solutions by relaxing the security guarantees they provide. We provide abstract models that capture the basic properties of these systems and identify their fundamental leakage channels. These models allow a generic, implementation-independent investigation of the inherent tradeoffs between security and efficiency. In particular, this modeling allows us in some cases to devise generic reconstruction attacks where the server learns the secret attributes of every record stored in the database, pointing to inherent limitations of these models.</p>
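<p>As a toy illustration of one such leakage channel (not any specific system's attack): a column protected with deterministic encryption preserves equality, so a server that knows only the public frequency ranking of plaintext values can recover them by frequency analysis. Here an HMAC stands in for a deterministic cipher, and the data and prior are hypothetical:</p>

```python
import hashlib
import hmac
from collections import Counter

KEY = b"client-secret"

def det_enc(value: str) -> str:
    # Deterministic "encryption": equal plaintexts give equal ciphertexts.
    return hmac.new(KEY, value.encode(), hashlib.sha256).hexdigest()

plaintext_column = ["flu", "flu", "flu", "cold", "rare_disease"]
encrypted_column = [det_enc(v) for v in plaintext_column]

# Public prior knowledge: plaintext values ranked by frequency, most common first.
prior = ["flu", "cold", "rare_disease"]

# The server sees only ciphertexts, ranks them by frequency, and aligns
# the ranking with the prior to recover every plaintext in the column.
ranked_cts = [ct for ct, _ in Counter(encrypted_column).most_common()]
recovered = dict(zip(ranked_cts, prior))

assert all(recovered[ct] == v for ct, v in zip(encrypted_column, plaintext_column))
```

No cryptographic weakness is exploited; the leakage comes entirely from the equality pattern the system exposes to stay efficient.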
<p class="p1">We present a new model of differentially private storage where differential privacy is preserved even against an attacker that controls the data and the queries made to it. We give a generic construction of differentially private storage that combines ORAM and differentially private sanitizers. We also provide efficient constructions and lower bounds for some specific query sets. We have implemented some of our algorithms and report on their efficiency. This is joint work with George Kollios, Kobbi Nissim, and Adam O’Neill.</p><br><b>Bio:</b> <p></p>
<p class="p1">Georgios Kellaris is currently the co-founder of F-Lock, a data security startup, as part of TandemLaunch Inc. Before F-Lock, he was a postdoc jointly at CRCS, Harvard University, and at Boston University for two years. His research is focused on database privacy and security. His work targets adapting theoretical research on security and privacy to real-life scenarios. He received his Ph.D. degree in Computer Science and Engineering from the Hong Kong University of Science and Technology (2015) with the support of the Hong Kong Ph.D. Fellowship Scheme. He holds a B.Sc. in Informatics and Telecommunications from the University of Athens, Greece (2006) and an M.Sc. degree in Digital Systems from the University of Piraeus, Greece (2008). He has worked as a researcher at the University of Piraeus in Greece, the Singapore Management University, the Nanyang Technological University in Singapore, and at Boston University.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18742017-09-20T16:25:42-04:002017-09-20T16:26:36-04:00https://talks.cs.umd.edu/talks/1874Append-only Authenticated Dictionaries (AADs) and Their ApplicationsAlin Tomescu - MIT<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, October 27, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">We study "append-only" authenticated dictionaries (AADs) in a different setting where not one but multiple, mutually-distrusting clients update the dictionary via an untrusted server. This model, sometimes described as the n-party model, differs from the 2-party and 3-party models as it cannot assume a trusted source to compute authentication information.</p>
<p class="p1">Our clients' goal is to maintain a fork-consistent or "append-only" view of the dictionary, as they transition from an arbitrarily old authenticated digest of the dictionary to a newer one. Specifically, each client wants to ensure that key-value pairs were only added to the new dictionary and that old pairs were neither removed nor changed.</p>
<p class="p1">Thus, the server should be able to compute an "append-only" proof that convinces clients this "append-only" property holds. The challenge is to construct a <em>small-sized</em> proof between arbitrary versions i and j of the dictionary.</p>
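<p>To make the append-only property concrete, here is a minimal hash-chain sketch in Python. It is not the construction from the talk: its "proof" between versions i and j is simply the appended entries themselves, so proof size is linear, which is exactly the inefficiency that constant-sized polynomial commitments are meant to avoid:</p>

```python
import hashlib

def extend(digest: bytes, entry: str) -> bytes:
    # digest_i = H(digest_{i-1} || entry_i): each version commits to all history.
    return hashlib.sha256(digest + entry.encode()).digest()

def append_only_ok(old_digest: bytes, new_digest: bytes, proof_entries) -> bool:
    # The client replays the claimed appends from its old digest and
    # checks that they land exactly on the server's new digest.
    d = old_digest
    for entry in proof_entries:
        d = extend(d, entry)
    return d == new_digest

genesis = b"\x00" * 32
d1 = extend(genesis, "alice:pk1")
d3 = extend(extend(d1, "bob:pk2"), "carol:pk3")

assert append_only_ok(d1, d3, ["bob:pk2", "carol:pk3"])        # honest server
assert not append_only_ok(d1, d3, ["bob:pk2", "mallory:pk9"])  # rewritten history
```

Any server that removed or altered an old entry can no longer produce entries that chain the client's old digest to the new one, which is the fork-consistency guarantee the clients want.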
<p class="p1">We show a bandwidth-efficient but computationally intensive construction based on constant-sized polynomial commitments (by Kate et al.).</p>
<p class="p1">The main application for AADs is secure public-key distribution, where a public-key directory should not be able to remove public-key bindings from the directory, or else it can impersonate users without (efficient) detection. </p>
<p class="p1">We believe AADs could have other applications in authenticated logging-based systems such as encrypted file systems or cryptocurrencies.</p><br><b>Bio:</b> <p class="p1">Alin is a PhD candidate at MIT focusing on public-key distribution for secure communication.</p>
<p class="p1">His interests lie at the intersection of theory and practice: he enjoys applied cryptography, distributed systems and writing code. </p>
<p class="p1">In the past, Alin has worked on privacy-preserving file systems, private social networking and secure email.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18822017-09-27T14:57:14-04:002017-09-27T14:57:14-04:00https://talks.cs.umd.edu/talks/1882Confidante: Usable Encrypted Email – A Case Study With Lawyers and JournalistsWei Bai - UMD ECE<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, September 29, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p></p>
<p class="p1">Email encryption tools remain underused, even by people who frequently conduct sensitive business over email, such as lawyers and journalists. Usable encrypted email has remained out of reach largely because key management and verification remain difficult. However, key management has evolved in the age of social media: Keybase is a service that allows users to cryptographically link public keys to their social media accounts (e.g., Twitter), enabling key trust without out-of-band communication. We design and prototype Confidante, an encrypted email client that uses Keybase for automatic key management. We conduct a user study with 15 people (8 U.S. lawyers and 7 U.S. journalists) to evaluate Confidante’s design decisions. We find that users complete an encrypted email task more quickly and with fewer errors using Confidante than with an existing email encryption tool, and that many users report finding Confidante comparable to using ordinary email. However, we also find that lawyers and journalists have diverse operational constraints and threat models, and thus that there may not be a one-size-fits-all solution to usable encrypted email. We reflect on our findings – both specifically about Confidante and more generally about the needs and constraints of lawyers and journalists – to identify lessons and remaining security and usability challenges for encrypted email.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/18972017-10-05T01:49:50-04:002017-10-05T01:49:50-04:00https://talks.cs.umd.edu/talks/1897Hackers vs. Testers: A Comparison of Software Vulnerability Discovery ProcessesDaniel Votipka - UMD CS<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3400 A.V. 
Williams Building (AVW)</a><br>Friday, October 6, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Identifying security vulnerabilities in software is a critical task that requires significant human effort. Currently, bug finding is often the responsibility of software testers before release and white-hat hackers (often within bug-bounty programs) afterward. This arrangement can be ad-hoc and far from ideal; for example, if testers could identify more vulnerabilities, software would be more secure at release time. Thus far, however, the processes used by each group — and how they compare to and interact with each other — have not been well studied. This paper takes a first step toward better understanding, and eventually improving, this ecosystem: we report on a semi-structured interview study (n=25) with both testers and hackers, focusing on how each group finds bugs, how they develop their skills, and the challenges they face. The results suggest that hackers and testers follow similar processes, but get different results due largely to differing experiences and therefore different underlying knowledge of security concepts. Based on these results, we provide recommendations to support improved security training for testers, better communication between hackers and developers, and smarter bug bounty policies to motivate hacker participation.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19022017-10-09T23:08:38-04:002017-10-09T23:08:38-04:00https://talks.cs.umd.edu/talks/1902 Towards Deep Learning Models Resistant to Adversarial AttacksRadu Marginean - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Wednesday, October 11, 2017, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: 'Lucida Grande', helvetica, arial, verdana, sans-serif; font-size: 14.4px;">Recent work has demonstrated that neural networks are vulnerable to adversarial examples, i.e., inputs that are almost indistinguishable from natural data and yet classified incorrectly by the network. In fact, some of the latest findings suggest that the existence of adversarial attacks may be an inherent weakness of deep learning models. To address this problem, we study the adversarial robustness of neural networks through the lens of robust optimization. This approach provides us with a broad and unifying view on much of the prior work on this topic. Its principled nature also enables us to identify methods for both training and attacking neural networks that are reliable and, in a certain sense, universal. In particular, they specify a concrete security guarantee that would protect against any adversary. These methods let us train networks with significantly improved resistance to a wide range of adversarial attacks. They also suggest the notion of security against a first-order adversary as a natural and broad security guarantee. We believe that robustness against such well-defined classes of adversaries is an important stepping stone towards fully resistant deep learning models.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19062017-10-16T09:36:07-04:002017-10-16T09:36:07-04:00https://talks.cs.umd.edu/talks/1906Walkie-Talkie: An Efficient Defense Against Passive Website Fingerprinting AttacksMahmoud F. Sayed<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3258 A.V. 
Williams Building (AVW)</a><br>Wednesday, October 18, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Website fingerprinting (WF) is a traffic analysis attack that allows an eavesdropper to determine the web activity of a client, even if the client is using privacy technologies such as proxies, VPNs, or Tor. Recent work has highlighted the threat of website fingerprinting to privacy-sensitive web users. Many previously designed defenses against website fingerprinting have been broken by newer attacks that use better classifiers. The remaining effective defenses are inefficient: they hamper user experience and burden the server with large overheads.</p>
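<p>As a toy illustration of the classifier-based attacks described above (not any specific published attack): a website fingerprinting adversary can represent each page load as a burst sequence and match an observed trace to the nearest known page. The traces here are hypothetical:</p>

```python
# Toy 1-nearest-neighbor website fingerprinting over burst sequences.
# Positive numbers model outgoing bursts (in cells), negative incoming.

def l1(a, b):
    # L1 distance between burst sequences, zero-padding the shorter one.
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return sum(abs(x - y) for x, y in zip(a, b))

def classify(trace, labeled_traces):
    # Predict the label of the nearest training trace.
    return min(labeled_traces, key=lambda lt: l1(trace, lt[1]))[0]

training = [
    ("site_a", [3, -10, 2, -40, 5]),
    ("site_b", [1, -2, 8, -90, 1, -7]),
]
print(classify([3, -11, 2, -38, 4], training))  # → site_a
```

Defenses like Walkie-Talkie work by molding these burst sequences so that different pages produce identical-looking traces, starving any such classifier of distinguishing features.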
<p class="p1">In this work we propose Walkie-Talkie, an effective and efficient WF defense. Walkie-Talkie modifies the browser to communicate in half-duplex mode rather than the usual full-duplex mode; half-duplex mode produces easily moldable burst sequences to leak less information to the adversary, at little additional overhead. Designed for the open-world scenario, Walkie-Talkie molds burst sequences so that sensitive and non-sensitive pages look the same. Experimentally, we show that Walkie-Talkie can defeat all known WF attacks with a bandwidth overhead of 31% and a time overhead of 34%, which is far more efficient than all effective WF defenses (often exceeding 100% for both types of overhead). In fact, we show that Walkie-Talkie cannot be defeated by any website fingerprinting attack, even hypothetical advanced attacks that use site link information, page visit rates, and intercell timing.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19132017-10-22T12:56:44-04:002017-10-22T12:56:44-04:00https://talks.cs.umd.edu/talks/1913Unconditional UC-Secure Computation with (Stronger-Malicious) PUFsSaikrishna Badrinarayanan - UCLA<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, November 17, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">In this talk, we explore the feasibility of UC-secure computation using trusted hardware as setup; specifically, we focus on physically unclonable functions (PUFs). Brzuska et al. (Crypto 2011) proved that unconditional UC-secure computation is possible if parties have access to honestly generated PUFs. Dachman-Soled et al. (Crypto 2014) then showed how to obtain unconditional UC-secure computation based on malicious PUFs, assuming such PUFs are stateless. They also showed that unconditional oblivious transfer is impossible against an adversary that creates malicious stateful PUFs.</p>
<p class="p1">We show how to go beyond this seemingly tight result by allowing any adversary to create stateful PUFs with a priori bounded state. This relaxes the restriction on the power of the adversary (limited to stateless PUFs in previous feasibility results), thereby achieving improved security guarantees. This is also motivated by practical scenarios, where the size of a physical object may be used to compute an upper bound on the size of its memory.</p>
<p class="p1">We then introduce a new security model in which any adversary is allowed to generate a malicious PUF that may encapsulate other (honestly generated) PUFs within it, such that the outer PUF has oracle access to all the inner PUFs. This is again a natural scenario, and in fact, similar adversaries have been studied in the tamper-proof hardware-token model (e.g., Chandran et al. (Eurocrypt 2008)), but no such notion has previously been considered with respect to PUFs. All previous constructions of UC-secure protocols suffer from explicit attacks in this stronger model.</p>
<p class="p1">This talk is based on joint work with Dakshita Khurana, Rafail Ostrovsky, and Ivan Visconti, and on the paper <a href="https://eprint.iacr.org/2016/636">https://eprint.iacr.org/2016/636</a>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19202017-10-24T16:55:05-04:002017-10-24T16:55:05-04:00https://talks.cs.umd.edu/talks/1920CLKSCREW: Exposing the Perils of Security-Oblivious Energy ManagementMatthew Lentz - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3258 A.V. Williams Building (AVW)</a><br>Wednesday, October 25, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">The need for power- and energy-efficient computing has resulted in aggressive cooperative hardware-software energy management mechanisms on modern commodity devices. Most systems today, for example, allow software to control the frequency and voltage of the underlying hardware at a very fine granularity to extend battery life. Despite their benefits, these software-exposed energy management mechanisms pose grave security implications that have not been studied before.</p>
<p class="p2">In this work, we present the CLKSCREW attack, a new class of fault attacks that exploit the security-obliviousness of energy management mechanisms to break security. A novel benefit for the attackers is that these fault attacks become more accessible since they can now be conducted without the need for physical access to the devices or fault injection equipment. We demonstrate CLKSCREW on commodity ARM/Android devices. We show that a malicious kernel driver (1) can extract secret cryptographic keys from Trustzone, and (2) can escalate its privileges by loading self-signed code into Trustzone. As the first work to show the security ramifications of energy management mechanisms, we urge the community to re-examine these security-oblivious designs.</p>
<p class="p2">paper: <a style="font-family: 'Helvetica Neue'; font-size: 14px;" href="https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-tang.pdf">https://www.usenix.org/system/files/conference/usenixsecurity17/sec17-tang.pdf</a></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19252017-10-31T11:13:33-04:002017-10-31T11:14:34-04:00https://talks.cs.umd.edu/talks/1925Towards Attack-Resilient IoT and CPS Sensors: Attacks and Countermeasures<a href="http://www.ece.umd.edu/~yshoukry/">Yasser Shoukry - UMD ECE</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, November 3, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">The rapidly increasing dependence on IoT and Cyber-Physical Systems (CPS) in building critical infrastructure--in the context of smart cities, power grid, medical devices, and self-driving cars--has opened the gates to increasingly sophisticated and harmful attacks with financial, societal, criminal or political effects. While a traditional cyber attack may leak credit-card or other personal sensitive information, an IoT/CPS-attack can lead to a loss of control in nuclear reactors, gas turbines, power grid, transportation networks, and other critical infrastructure, placing the Nation's security, economy, and public safety at risk.</p>
<p>In the first part of this talk, I will focus on a problem known as "secure state estimation." It aims to estimate the state of a physical system when an adversary arbitrarily corrupts a subset of its sensors. Although of critical importance, this problem is NP-hard and combinatorial in nature since the subset of the attacked sensors is unknown. I will present a new Satisfiability Modulo Convex (SMC) procedure that uses a lazy combination of Boolean satisfiability solving and convex programming. Drawing on multiple experimental and simulation results, I will argue that SMC solvers outperform other techniques when used to solve the secure state estimation problem. In the second part of this talk, I will focus on the privacy-preserving sensor processing problem. In particular, I will introduce a localization system that combines partially homomorphic encryption with a new way of structuring the localization problem to enable efficient and accurate computation of a target’s location without requiring the sensors to make public their locations or measurements.</p><br><b>Bio:</b> <p style="margin: 0px 0px 20px; border: 0px; padding: 0px; font-size: 14px; color: #404040; font-family: 'Helvetica Neue', Arial, 'Liberation Sans', FreeSans, sans-serif;">Yasser Shoukry received his Ph.D. in Electrical Engineering from the University of California, Los Angeles in 2015 where he was affiliated with both the Cyber-Physical Systems Lab (supervised by Prof. Paulo Tabuada) as well as the Networked and Embedded Systems Lab (supervised by Prof. Mani Srivastava). He received the M.Sc. and the B.Sc. degrees (with distinction and honors) in Computer and Systems engineering from Ain Shams University, Cairo, Egypt in 2010 and 2007, respectively. <br><br>Between September 2015 and July 2017, Yasser was a joint post-doctoral associate at UC Berkeley, UCLA, and UPenn under the mentorship of Prof. George J. Pappas, Prof. Sanjit A. Seshia, and Prof. Paulo Tabuada. Before pursuing his Ph.D. 
at UCLA, he spent four years as an R&D engineer in the automotive embedded systems industry. Yasser's research interests include the design and implementation of resilient cyber-physical systems by drawing on tools from embedded systems, formal methods, control theory, and machine learning. <br><br>Prof. Shoukry is the recipient of the Best Demo Award from the International Conference on Information Processing in Sensor Networks (IPSN) in 2017, the Best Paper Award from the International Conference on Cyber-Physical Systems (ICCPS) in 2016, the Distinguished Dissertation Award from the UCLA EE department in 2016, the UCLA Chancellor's Prize in 2011/2012, the UCLA EE Graduate Division Fellowship in 2011/2012, and the UCLA EE Preliminary Exam Fellowship in 2012. In 2015, he led the UCLA/Caltech/CMU team to win the NSF Early Career Investigators (NSF-ECI) research challenge. His team represented the NSF-ECI in the NIST Global Cities Technology Challenge, an initiative designed to advance the deployment of Internet of Things (IoT) technologies within a smart city.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19302017-11-07T07:59:33-05:002017-11-07T07:59:33-05:00https://talks.cs.umd.edu/talks/1930Why Your Encrypted Database is Not SecurePaul Grubbs - Cornell Tech<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, November 10, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Encrypted databases, which use specialized cryptography to support</p>
<p class="p1">efficient queries on encrypted data, are a popular approach to protecting</p>
<p class="p1">data from compromised database management systems. They have received a</p>
<p class="p1">great deal of interest from academic researchers and practitioners. This</p>
<p class="p1">talk will examine two ways in which recent encrypted databases are</p>
<p class="p1">vulnerable to attacks.</p>
<p class="p2"> </p>
<p class="p1">The first way is by using cryptography which makes an unsafe tradeoff of</p>
<p class="p1">security for functionality. To demonstrate this I will present new attacks</p>
<p class="p1">against order-revealing encryption, a primitive used in many encrypted</p>
<p class="p1">databases to enable searching and sorting on encrypted data. The attacks</p>
<p class="p1">recover as much as 99% of plaintexts.</p>
<p class="p2"> </p>
<p class="p1">The second way recent encrypted databases are vulnerable to attacks is by</p>
<p class="p1">making incorrect assumptions about the behavior of the underlying database</p>
<p class="p1">system. I will show how the "snapshot attack" threat model used to support</p>
<p class="p1">the security claims of many encrypted databases does not reflect the</p>
<p class="p1">information about past queries available in any snapshot attack on a real</p>
<p class="p1">database system.</p>
<p class="p2"> </p>
<p class="p1">Paper links: <a href="https://eprint.iacr.org/2016/895">https://eprint.iacr.org/2016/895</a> and</p>
<p class="p1"><a href="https://eprint.iacr.org/2017/468">https://eprint.iacr.org/2017/468</a></p><br><b>Bio:</b> <p>Paul Grubbs is a third-year PhD student at Cornell Tech, advised by Thomas Ristenpart. His research is in applied cryptography and security.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19362017-11-14T02:40:32-05:002017-11-14T02:40:32-05:00https://talks.cs.umd.edu/talks/1936Eleos: ExitLess OS Services for SGX EnclavesStephen Herwig<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3258 A.V. Williams Building (AVW)</a><br>Wednesday, November 15, 2017, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Intel Software Guard eXtensions (SGX) enable secure and trusted execution of user code in an isolated enclave to protect against a powerful adversary. Unfortunately, running I/O-intensive, memory-demanding server applications in enclaves leads to significant performance degradation. Such applications put a substantial load on the in-enclave system call and secure paging mechanisms, which turn out to be the main reason for the application slowdown. In addition to the high direct cost of thousands-of-cycles long SGX management instructions, these mechanisms incur the high indirect cost of enclave exits due to associated TLB flushes and processor state pollution.</p>
<p class="p2"> </p>
<p class="p1">We tackle these performance issues in Eleos by enabling exit-less system calls and exit-less paging in enclaves. Eleos introduces a novel Secure User-managed Virtual Memory (SUVM) abstraction that implements application-level paging inside the enclave. SUVM eliminates the overheads of enclave exits due to paging, and enables new optimizations such as sub-page granularity of accesses.</p>
<p class="p2"> </p>
<p class="p1">We thoroughly evaluate Eleos on a range of microbenchmarks and two real server applications, achieving notable system performance gains. memcached and a face verifi- cation server running in-enclave with Eleos, achieves up to 2.2x and 2.3x higher throughput respectively while working on datasets up to 5x larger than the enclave’s secure physical memory.</p>
<p class="p1"> </p>
<p class="p1"></p>
<p class="p1">Link: <a href="https://0f675898-a-62cb3a1a-s-sites.googlegroups.com/site/silbersteinmark/Home/cr-eurosys17sgx.pdf">https://0f675898-a-62cb3a1a-s-sites.googlegroups.com/site/silbersteinmark/Home/cr-eurosys17sgx.pdf</a></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19532017-11-29T15:49:33-05:002017-11-29T15:49:33-05:00https://talks.cs.umd.edu/talks/1953Fighting Black Boxes, Adversaries, and Bugs in Deep Learning<a href="https://cs.stanford.edu/~pliang/">Percy Liang - Stanford</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Friday, December 1, 2017, 1:00-2:00 pm<br><br><b>Abstract:</b> <p>While deep learning has been hugely successful in producing highly accurate models, the resulting models are sometimes (i) difficult to interpret, (ii) susceptible to adversaries, and (iii) suffer from subtle implementation bugs due to their stochastic nature. In this talk, I will take some initial steps towards addressing these problems of interpretability, robustness, and correctness using some classic mathematical tools. First, influence functions from robust statistics can help us understand the predictions of deep networks by answering the question: which training examples are most influential on a particular prediction? Second, semidefinite relaxations can be used to provide guaranteed upper bounds on the amount of damage an adversary can do for restricted models. Third, we use the Lean proof assistant to produce a working implementation of stochastic computation graphs which is guaranteed to be bug-free.</p><br><b>Bio:</b> <p>Percy Liang is an Assistant Professor in the Computer Science and Statistics departments at Stanford.</p>
<p>His research focuses on developing trustworthy agents that can communicate effectively with people and improve over time through interaction. </p>
<p>He identifies himself with the machine learning (ICML, NIPS) and natural language processing (ACL, NAACL, EMNLP) communities.</p>
<p>More details can be found on his webpage: <a href="https://cs.stanford.edu/~pliang/">https://cs.stanford.edu/~pliang/</a></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19672018-01-15T11:04:34-05:002018-01-15T11:04:34-05:00https://talks.cs.umd.edu/talks/1967Doing Real Work with FHEShai Halevi<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Monday, January 22, 2018, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>I will describe our recent experience, building an FHE-based system for computing the coefficients of an approximate logistic-regression model. The aim of this project was to examine the feasibility of a solution that operates "deep within the bootstrapping parameter regime", solving a complicated system that cannot be addressed just by using a somewhat homomorphic scheme. Our solution can handle thousands of records and hundreds of fields, and it takes a few hours to run. In this presentation I will talk about the challenges of designing and implementing this solution, and about some of the optimizations that went into making it feasible.<br> <br> Time permitting, I will also talk about a number of other recent optimizations that we developed for homomorphic packed linear transformations (even though we ended up not using any of them in this project).<br> <br> Based on joint work with Jack Lik Hon Crawford, Craig Gentry, Daniel Platt, and Victor Shoup</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19772018-01-24T11:30:15-05:002018-01-24T11:30:15-05:00https://talks.cs.umd.edu/talks/1977Using Efficient Oblivious Computation to Keep Data Private and Obfuscate ProgramsKartik Nayak - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, January 26, 2018, 11:00 am-1:00 pm<br><br><b>Abstract:</b> <p style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121;">Protecting sensitive user data and proprietary programs are f<span style="font-size: 13px;">undamental and important challenges.</span> For instance, when users outsource their private data to the cloud, they risk leakage of the <span style="font-size: 13px;">data in the event of a data breach; encrypting their data is not a </span>workable<span style="font-size: 13px;"> solution since it impedes the cloud provider’s ability to </span><span style="font-size: 13px;">offer user-specific services. When companies </span>execute<span style="font-size: 13px;"> proprietary </span><span style="font-size: 13px;">programs on third-party cloud providers, they similarly face the risk </span><span style="font-size: 13px;">of leaking trade secrets.</span></p>
<p><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">In this talk, I will discuss efficient data-oblivious computation and show </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">how it can be applied to address each of the above. In particular, I will </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">introduce GraphSC, an efficient, parallel, secure-computation </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">framework for running data-mining algorithms on private user data that </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">allows programmers to express computation tasks using the familiar </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">GraphLab abstraction. I will then present HOP, a secure processor designed </span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121; font-size: 13px;">to obfuscate proprietary programs. I will conclude with an overview of</span><span style="font-family: -webkit-standard; text-size-adjust: auto; color: #212121;"> my other ongoing and future research on privacy-preserving computation and blockchains.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/19792018-01-24T11:47:27-05:002018-01-24T11:47:27-05:00https://talks.cs.umd.edu/talks/1979How to Share a Secret: Infinitely, Dynamically and RobustlyIlan Komargodski - Cornell Tech<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, February 16, 2018, 12:00-1:00 pm<br><br><b>Abstract:</b> <p>Secret sharing schemes allow a dealer to distribute a secret piece of information among several parties such that only qualified subsets of parties can reconstruct the secret. The collection of qualified subsets is called an access structure. The best known example is the k-threshold access structure, where the qualified subsets are those of size at least k. When k=2 and there are n parties, there are schemes where the size of the share each party gets is roughly log(n) bits, and this is tight even for secrets of 1 bit. In these schemes, the number of parties n must be given in advance to the dealer.</p>
<p>We consider the case where the set of parties is not known in advance and could potentially be infinite. Our goal is to give the t-th arriving party as small a share as possible, as a function of t. We present a scheme for general access structures and several schemes for variants of the k-threshold access structure in which at any point in time some bounded number of parties can recover the secret. Lastly, we discuss other classical notions such as robustness, and adapt them to the unbounded setting.</p>
<p>The talk is based on joint works with Moni Naor and Eylon Yogev, and with Anat Paskin-Cherniavsky.</p>
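As background, the classical fixed-n construction for the k-threshold access structure (Shamir's scheme) can be sketched as follows; the modulus and values are illustrative, and this is the bounded setting the talk generalizes, not the unbounded construction itself:

```python
# Shamir k-of-n secret sharing over a prime field (background sketch).
import random

P = 2**61 - 1  # prime modulus, chosen for illustration only

def share(secret, k, n):
    """Split `secret` into n shares such that any k reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    # Party i's share is the degree-(k-1) polynomial evaluated at x = i.
    return [(i, sum(c * pow(i, j, P) for j, c in enumerate(coeffs)) % P)
            for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange-interpolate at x = 0 to recover the secret."""
    total = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 mod P
    return total

shares = share(123456, k=2, n=5)
assert reconstruct(shares[:2]) == 123456   # any two shares suffice
assert reconstruct(shares[-2:]) == 123456
```

Note that here n (and hence the share-indexing) is fixed at dealing time, which is exactly the assumption the unbounded setting drops.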
<p> </p><br><b>Bio:</b> <p>Ilan Komargodski is a postdoctoral researcher at Cornell Tech, hosted by Prof. Rafael Pass and Prof. Elaine Shi.</p>
<p>He completed his Ph.D. at the Weizmann Institute of Science, where he was fortunate to have Prof. Moni Naor as his advisor. He also received an M.Sc. at the Weizmann Institute under the guidance of Prof. Ran Raz.</p>
<p>He is interested in foundations of theoretical computer science, with an emphasis on cryptography and its interplay with complexity theory. </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20092018-01-31T14:08:37-05:002018-01-31T15:38:16-05:00https://talks.cs.umd.edu/talks/2009Formal reasoning for AWS cloud security<a href="http://www0.cs.ucl.ac.uk/staff/b.cook/">Byron Cook - Director, Automated Reasoning Group, Amazon Web Services</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 9, 2018, 1:30-2:30 pm<br><br><b>Abstract:</b> <p>I describe the use of formal verification tools within Amazon Web Services to further ensure the security of its customers. We discuss some accomplishments, describe some of the challenges of operationalizing proof, muse on lessons learned, and outline some ideas for future research.</p><br><b>Bio:</b> <p>Byron Cook leads AWS Security’s Automated Reasoning Group which develops and applies constraint/logic based automated tools for proving the correctness of software, network configurations, and policies. Prior to joining AWS, Byron was for 10 years a researcher at Microsoft Research, where he worked in the areas of functional programming, hardware modeling and design, SAT-solving, symbolic model checking for finite-state systems, decision procedures, automatic program verification and analysis, and the analysis of biological systems. Byron’s research in automatic program verification has gained significant recognition (e.g. a substantial publication record, numerous keynote speaker invitations, and press hits in Scientific American, Science, Vogue, Financial Times, Economist, and Wired). Byron is particularly well known for his work on automatic methods for proving program termination, notably as part of the Terminator termination prover. 
This work represented a breakthrough, challenging the prevailing opinion in computer science that automatic termination proving is impossible. Byron is also well known for his contributions to Microsoft's SLAM project, which is often credited as a catalyst for the revival of automatic program verification research.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20102018-01-31T21:00:54-05:002018-01-31T21:00:54-05:00https://talks.cs.umd.edu/talks/2010Security and Privacy of Outsourced Data and ComputationsYupeng Zhang - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3450 A.V. Williams Building (AVW)</a><br>Thursday, February 1, 2018, 3:00-4:00 pm<br><br><b>Abstract:</b> <div>
<div>Nowadays many users outsource their data and computation to cloud-service providers such as Amazon EC2, Google Cloud, and Microsoft Azure that are potentially untrusted or may be compromised. Meanwhile, companies are collecting more and more data from users so as to run machine-learning algorithms on that data to develop products and services. Despite the great benefits of these techniques, they currently require users to give up control of their data and to trade off privacy for utility.</div>
<div> </div>
<div>I will discuss several cryptographic techniques I have developed to address these issues. I will first talk about techniques for verifiable storage and computation that can be used to ensure the correctness of computations done in the cloud and services offered by cloud providers. I will then discuss privacy-preserving machine learning, which allows companies to execute machine-learning algorithms without learning users’ data. I will conclude with some thoughts on future applications of these new protocols to other domains.</div>
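To give a flavor of verifiable storage, here is the textbook Merkle-tree sketch (standard background, not the speaker's constructions): the client keeps only one short digest and can verify any block the cloud returns against it.

```python
# Merkle-tree authenticated storage: client keeps the root hash only.
import hashlib

def h(data):
    return hashlib.sha256(data).digest()

def _pad(level):
    return level + [level[-1]] if len(level) % 2 else level

def merkle_root(blocks):
    level = [h(b) for b in blocks]
    while len(level) > 1:
        level = _pad(level)
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def prove(blocks, idx):
    """Sibling hashes from leaf idx up to the root (server-side)."""
    level, proof = [h(b) for b in blocks], []
    while len(level) > 1:
        level = _pad(level)
        proof.append((level[idx ^ 1], idx % 2))  # sibling, am-I-right-child
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        idx //= 2
    return proof

def verify(root, block, proof):
    """Client-side check against the stored root."""
    node = h(block)
    for sibling, is_right_child in proof:
        node = h(sibling + node) if is_right_child else h(node + sibling)
    return node == root

blocks = [b"a", b"b", b"c", b"d"]
root = merkle_root(blocks)
assert verify(root, b"c", prove(blocks, 2))      # honest server passes
assert not verify(root, b"x", prove(blocks, 2))  # tampered block fails
```

Verifying a block costs the client one hash per tree level (logarithmic in the number of blocks), which is what makes outsourced storage auditable at low cost.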
</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20272018-03-07T10:09:38-05:002018-03-07T10:09:38-05:00https://talks.cs.umd.edu/talks/2027NDSS Symposium 2018 Paper DiscussionsMC2 Grad Students - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, March 9, 2018, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">We will discuss 6 papers among these published at NDSS 2018. Each presenter will introduce the paper contributions and moderate a discussion around it. We will allocate 10 minutes for each paper.</p>
<p class="p2"> </p>
<p class="p1">The full list of papers that were published at NDSS could be found here: <a href="https://www.ndss-symposium.org/ndss2018/programme/">https://www.ndss-symposium.org/ndss2018/programme/</a></p>
<p class="p2"> </p>
<p class="p1">The list of presented papers and their moderators:</p>
<p class="p2"> </p>
<p class="p1">• Trojaning Attack on Neural Networks. - Yigitcan Kaya</p>
<p class="p1">• Towards a Timely Causality Analysis for Enterprise Security. - Daniel Votipka</p>
<p class="p1">• Automated website fingerprinting through deep learning. - Ziyun Zhu</p>
<p class="p1">• Revisiting Private Stream Aggregation: Lattice-Based PSA. - Mukul Kulkarni</p>
<p class="p1">• Investigating Ad Transparency Mechanisms in Social Media: A Case Study of Facebooks Explanations. - Wei Bai</p>
<p class="p1">• Cloud Strife: Mitigating the Security Risks of Domain-Validated Certificates. - Doowon Kim</p>
<p class="p2"> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20182018-02-20T13:18:18-05:002018-02-20T13:18:18-05:00https://talks.cs.umd.edu/talks/2018Secure Computation with Low Communication from Cross-checkingSamuel Ranellucci - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, February 23, 2018, 12:00-1:00 pm<br><br><b>Abstract:</b> <p>We construct new four party protocols for secure computation that are secure against a single malicious corruption. </p>
<p>Our protocols can perform computations over a binary ring, and require sending just 2 ring elements per party, per gate.</p>
<p>In the special case of Boolean circuits, this amounts to sending 2 bits per party, per gate. </p>
<p>One of our protocols is robust, yet requires almost no additional communication. </p>
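As generic background on the secret-sharing substrate such protocols build on (a minimal sketch, not the paper's four-party construction), values over the binary ring can be XOR-shared, after which linear gates need no communication at all:

```python
# XOR (additive, mod 2) secret sharing of a bit among n parties.
import random

def share_bit(b, n=4):
    """Split bit b into n shares that XOR back to b."""
    shares = [random.randrange(2) for _ in range(n - 1)]
    last = b
    for s in shares:
        last ^= s
    return shares + [last]

def reveal(shares):
    out = 0
    for s in shares:
        out ^= s
    return out

# XOR gates are "free": each party combines its own shares locally.
x, y = 1, 0
xs, ys = share_bit(x), share_bit(y)
z_shares = [a ^ b for a, b in zip(xs, ys)]
assert reveal(z_shares) == x ^ y
# Multiplicative (AND) gates are where per-gate communication -- the
# 2 ring elements per party quoted above -- comes in; omitted here.
```

The per-gate cost quoted in the abstract is thus the price of the non-linear gates; everything linear stays local.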
<p>Our construction can be viewed as a variant of the ``dual execution'' approach, but, because we rely on four parties instead of two, we can avoid any leakage, achieving the standard notion of security with an honest majority against a malicious adversary.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20352018-03-28T17:17:45-04:002018-03-28T17:17:45-04:00https://talks.cs.umd.edu/talks/2035Protecting Privacy & Guaranteeing Generalization by Controlling InformationThomas Steinke - IBM Almaden<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, April 4, 2018, 10:00-11:00 am<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline;">As data is being more widely collected and used, privacy and statistical validity are becoming increasingly difficult to protect. 
Sound solutions are needed, as ad hoc approaches have resulted in several high-profile failures.</span><br style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"><br style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"><span style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline;">In this talk, I will illustrate how privacy can be unwittingly compromised -- i.e., sensitive information can be leaked by seemingly innocuous "anonymized" or aggregate data. I will then show how differential privacy avoids these pitfalls. Differential privacy is an information-theoretic notion of algorithmic stability that provides a framework for measuring the leakage of private information and, most importantly, how this information accumulates over multiple uses of an individual's data. 
This allows us to design algorithms to perform sophisticated statistical analyses, while providing robust privacy guarantees.</span><br style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"><br style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"><span style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline;">Privacy turns out to be intimately related to generalization in machine learning. In particular, a differentially private algorithm is guaranteed to not "overfit" its data, meaning that any statistical conclusions extend to the underlying distribution from which the data was drawn. 
I will discuss this connection and explain how it is especially useful for adaptive data analysis, namely when one dataset is used over and over again and each successive analysis is informed by the outcome of previous analyses.</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial,sans-serif; font-size: 12.8px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial; float: none; display: inline;">Thomas <span class="il">Steinke</span> is a postdoctoral researcher at the IBM Almaden Research Center in San Jose, California. In 2016, he graduated from Harvard University with a PhD in Computer Science advised by Salil Vadhan and prior to that he completed a MSc and a BSc(Hons) at the University of Canterbury in New Zealand. His research interests include providing rigorous tools for privacy-preserving data analysis and statistically valid adaptive data analysis, as well as pseudorandomness.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20372018-04-04T10:30:35-04:002018-04-04T10:30:35-04:00https://talks.cs.umd.edu/talks/2037ChainSmith: Automatically Learning the Semantics of Malicious Campaigns by Mining Threat Intelligence ReportsZiyun Zhu - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 6, 2018, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Modern cyber attacks consist of a series of steps and are generally part of larger campaigns. 
Large-scale field data provides a quantitative measurement of these campaigns.</p>
<p class="p1">On the other hand, security practitioners extract and report qualitative campaign characteristics manually. Linking the two sources provides new insights about attacker strategies from measurements. However, this is a time-consuming task because qualitative measurements are generally reported in natural language and are not machine-readable.</p>
<p class="p2"> </p>
<p class="p1">We propose an approach to bridge measurement data with manual analysis. We borrow the idea from threat intelligence: we define campaigns using a 4-stage model, and describe each stage using IOCs (indicators of compromise), e.g. URLs and IP addresses. We train a multi-class classifier to extract IOCs and further categorize them into different stages. We implement these ideas in a system called ChainSmith. Our system can achieve 91.9% precision and 97.8% recall in extracting IOCs,</p>
<p class="p1">It can also determine the campaign roles for 86.2% of IOCs, with 78.2% precision and 80.7% recall. We run ChainSmith on 14,155 online security articles, from which we collect 24,653 IOCs. The semantic roles allow us to link manual attack analysis with large-scale field measurements. In particular, we study the effectiveness of different persuasion techniques used to entice users into downloading the payloads. We find that campaigns usually start with social engineering, and that the “missing codec” ruse is a common persuasion technique that generates the most suspicious downloads each day.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20382018-04-05T14:30:21-04:002018-04-05T14:30:21-04:00https://talks.cs.umd.edu/talks/2038Cybersecurity Threats to U.S. ElectionsAlex Halderman - University of Michigan<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=KEB">1110 Jeong H. Kim Engineering Building (KEB)</a><br>Monday, April 9, 2018, 11:00-11:59 am<br><br><b>Abstract:</b> <div>
<div>Strengthening election cybersecurity is essential for safeguarding American democracy, and it’s an increasingly urgent task. Despite 15 years of research demonstrating critical security weaknesses, most of the country continues to use vulnerable electronic voting machines, and the landscape of threats from cybercriminals and nation-state attackers has grown increasingly hostile.</div>
</div>
<div>
<div> </div>
</div>
<div>
<div>In this talk, I will explain how cyberattacks on voting infrastructure threaten the integrity of U.S. elections. Sophisticated attackers can infiltrate electronic voting machines and silently alter results in swing states, potentially changing the outcome of a national election. Such attacks do not require voting machines to be connected to the Internet, and the technical capabilities are well within reach for hostile foreign governments. To illustrate this threat, I will demonstrate an attack on a real voting machine of a type still used in 20 states, including, until recently, Maryland.</div>
</div>
<div>
<div> </div>
</div>
<div>
<div>Researchers have developed practical safeguards that can robustly defend our elections, but only a handful of states have deployed them so far, due to a lack of resources and political will. Fortunately, Congress recently appropriated $380M in new funding for the states—including $7M for Maryland—to strengthen election security. I’ll explain how Maryland and other states can use this funding wisely, and what computer scientists and other citizens can do to help.</div>
</div><br><b>Bio:</b> <p>Alex <span class="il">Halderman</span> is Professor of Computer Science & Engineering at the University of Michigan. His research spans computer and network security, applied cryptography, security measurement, censorship resistance, and electronic voting, as well as the interaction of technology with politics and international affairs. His recent projects include ZMap, Let’s Encrypt, and the TLS Logjam and DROWN vulnerabilities. Prof. <span class="il">Halderman</span> has performed numerous security evaluations of real-world voting systems, both in the U.S. and around the world. After the 2016 U.S. presidential election, he advised recount initiatives in Michigan, Wisconsin, and Pennsylvania in an effort to help detect and deter cyberattacks, and in 2017 he testified to the Senate Intelligence Committee about cybersecurity threats to election infrastructure. He was named by Popular Science as one of the “brightest young minds reshaping science, engineering, and the world.”</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/2">CS Department</a> ⋅ <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20412018-04-08T18:03:15-04:002018-04-08T18:03:15-04:00https://talks.cs.umd.edu/talks/2041Improved Stock-Market Auctions Using Secure ComputationCharanjit Jutla - IBM TJ Watson<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, April 24, 2018, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: Verdana;">Stock markets have two primary functions: providing liquidity and price discovery. 
While market micro-structure was mostly ignored or assumed to function ideally for the purpose of asset pricing, O'Hara (Journal of Finance, 2003) has established that both liquidity provision and price discovery negatively affect asset pricing and returns. In this talk, we propose using cryptography, and in particular secure multi-party computation (MPC), to set up a novel stock-market structure that, to a large extent, removes the negative consequences of liquidity costs and periodic price discovery. Interestingly, the proposed market structure takes us back to the early days of stock markets, i.e., periodic-call markets, but with the not-so-trusted auctioneer replaced by multiple parties running MPC, with no individual party (or minor coalition) learning the order-book.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20482018-04-18T07:57:42-04:002018-04-18T07:57:42-04:00https://talks.cs.umd.edu/talks/2048Resilient Computing and Adaptive Fault Tolerance<a href="http://homepages.laas.fr/fabre/Site/Homepage.html">Jean-Charles Fabre - Institut National Polytechnique de Toulouse</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, April 20, 2018, 12:00-1:00 pm<br><br><b>Abstract:</b> <p class="p1">Evolution of systems during their operational life is mandatory, and neither updates nor upgrades should impair their dependability properties. Dependable systems must evolve to accommodate changes, such as new threats and undesirable events, application updates, or variations in available resources. A system that remains dependable when facing changes is called resilient.
In this talk, we present an innovative approach that takes advantage of component-based software engineering technologies to tackle the on-line adaptation of fault tolerance mechanisms. The development process relies on two key factors: designing fault tolerance mechanisms for adaptation, and leveraging component-based middleware that enables fine-grained control and modification of the software architecture at runtime. We describe the principles and methodology for the development of adaptive fault tolerance mechanisms. We discuss the application of these ideas in the context of automotive embedded systems and also introduce some measures to quantify resilience.</p><br><b>Bio:</b> <p>Jean-Charles Fabre is a Professor at the Institut National Polytechnique de Toulouse.</p>
<p>He obtained his Master of Science in 1979 and his Ph.D. in Computer Science in 1982 from the University of Toulouse. He also obtained a Senior Researcher Diploma (HDR) in 1992, based on his past research achievements; the HDR is the top-level degree of the French education system.</p>
<p>Having worked in fault-tolerant computing for more than 30 years, he was first a researcher at the THOMSON Central Research Lab in Paris, and was then involved in the Chorus project at INRIA (the National Research Institute in Automatics and Informatics of the French Ministry of Industry), where he was responsible for the design and implementation of fault-tolerance strategies in the Chorus distributed architecture.</p>
<p>After a short period with the National Space Centre, he has been with LAAS-CNRS (the Laboratory for Analysis and Architecture of Systems) in Toulouse, France, since 1984, working in the « Dependable Computing and Fault Tolerance » research group. His past and current interests concern distributed operating systems and algorithms, dependable computing, and reflective and resilient computing systems.</p>
<p>Formerly a researcher at INRIA and at CNRS, he has been a Research Director at CNRS. Since 2003, he has been a Professor at the Institut National Polytechnique de Toulouse; he became full Professor in December 2006 and was promoted in 2016 to the highest position of the faculty ranks in French universities.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20692018-05-02T17:32:01-04:002018-05-02T17:32:01-04:00https://talks.cs.umd.edu/talks/2069Hackers vs. Testers: A Comparison of Software Vulnerability Discovery ProcessesDaniel Votipka - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Thursday, May 10, 2018, 12:30-1:30 pm<br><br><b>Abstract:</b> <p>Identifying security vulnerabilities in software is a critical task that requires significant human effort. Currently, vulnerability discovery is often the responsibility of software testers before release and white-hat hackers (often within bug bounty programs) afterward. This arrangement can be ad-hoc and far from ideal; for example, if testers could identify more vulnerabilities, software would be more secure at release time. Thus far, however, the processes used by each group — and how they compare to and interact with each other — have not been well studied. This paper takes a first step toward better understanding, and eventually improving, this ecosystem: we report on a semi-structured interview study (n=25) with both testers and hackers, focusing on how each group finds vulnerabilities, how they develop their skills, and the challenges they face. The results suggest that hackers and testers follow similar processes, but get different results due largely to differing experiences and therefore different underlying knowledge of security concepts. Based on these results, we provide recommendations to support improved security training for testers, better communication between hackers and developers, and smarter bug bounty policies to motivate hacker participation.</p>
<p> </p>
<p>Paper:</p>
<p> </p>
<p><a href="http://legacydirs.umiacs.umd.edu/~dvotipka/papers/VotipkaHackerTesters2018.pdf">Hackers vs. Testers: A Comparison of Software Vulnerability Discovery Processes</a></p>
<p>Daniel Votipka, Rock Stevens, Elissa M. Redmiles, Jeremy Hu, and Michelle M. Mazurek</p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20742018-05-08T10:00:54-04:002018-05-08T10:00:54-04:00https://talks.cs.umd.edu/talks/2074Characterizing the Space of Adversarial Examples in Machine Learning<a href="https://www.papernot.fr/">Nicolas Papernot - Pennsylvania State University</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Wednesday, May 16, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p class="p1">There is growing recognition that machine learning (ML) exposes new security and privacy vulnerabilities in software systems, yet the technical community's understanding of the nature and extent of these vulnerabilities remains limited but expanding. In this talk, I explore the threat model space of ML algorithms, and systematically explore the vulnerabilities resulting from the poor generalization of ML models when they are presented with inputs manipulated by adversaries. This characterization of the threat space prompts an investigation of defenses that exploit the lack of reliable confidence estimates for predictions made. In particular, we introduce a promising new approach to defensive measures tailored to the structure of deep learning. Through this research, we expose connections between the resilience of ML to adversaries, model interpretability, and training data privacy.</p><br><b>Bio:</b> <p class="p1">Nicolas Papernot earned his PhD in Computer Science and Engineering working with Professor Patrick McDaniel at the Pennsylvania State University. His research interests lie at the intersection of computer security, privacy and machine learning. He is supported by a Google PhD Fellowship in Security and received a best paper award at ICLR 2017. 
He is also the co-author of CleverHans, an open-source library widely adopted in the technical community to benchmark machine learning in adversarial settings. In 2016, he received his M.S. in Computer Science and Engineering from the Pennsylvania State University and his M.S. in Engineering Sciences from the Ecole Centrale de Lyon.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20842018-06-23T07:22:20-04:002018-06-23T07:22:20-04:00https://talks.cs.umd.edu/talks/2084Quantitative information-flow tracking using symbolic execution and statistically-guided model counting<a href="https://www-users.cs.umn.edu/~smccaman/">Stephen McCamant - University of Minnesota</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Monday, August 13, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Quantitative information-flow (QIF) analysis provides a measure of the </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">amount of information revealed when a program runs. For instance, if a </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">program operates on secret (private, confidential, etc.) data, QIF </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">measures in bits of the amount of information about that secret that </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">can be inferred from the program's outputs or other observable </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">behaviors. 
One class of powerful techniques for this analysis starts </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">by symbolically executing software to characterize its input-output </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">relation as a formula, and then uses model counting applied to such a </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">formula to provide an information-flow estimate. Model counting is </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">the problem of counting the number of solutions to a logical </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">formula. Even approximate model counting can be quite expensive; I'll </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">describe a technique we've developed to speed up hashing-based </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">approximate model counting by using a statistical model to choose </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">queries (TACAS 2018). Then I'll put this in a broader context using </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">binary symbolic execution to perform QIF analysis of complete </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">programs. 
To improve the scalability of this measurement to larger </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">programs, I'll describe our ongoing work on a hybrid approach that </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">uses precise-but-expensive model counting to improve the precision of </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">a scalable-but-conservative approach based on network flow capacity </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">(PLDI 2008).</span></p>
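The quantity being measured can be illustrated with a brute-force toy (my own sketch under simplifying assumptions, not the paper's symbolic-execution and model-counting pipeline): for a deterministic program over a finite secret domain, the output can reveal at most log2 of the number of distinct outputs, which is exactly what counting solutions of the input-output formula estimates at scale.

```python
import math

def leakage_upper_bound_bits(program, secrets):
    """Max information (in bits) a deterministic program's output reveals
    about its secret input: log2 of the number of distinct outputs.
    Brute-force enumeration stands in for model counting here."""
    outputs = {program(s) for s in secrets}
    return math.log2(len(outputs))

# Toy example over 4-bit secrets: a password check leaks at most 1 bit
# per query (output is only True/False), while echoing the secret back
# leaks all 4 bits.
secrets = range(16)
print(leakage_upper_bound_bits(lambda s: s == 11, secrets))  # 1.0
print(leakage_upper_bound_bits(lambda s: s, secrets))        # 4.0
```

Enumeration obviously does not scale, which is why the talk's hashing-based approximate model counting, and the hybrid with flow-capacity bounds, matter for whole programs.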
<p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">(research joint with Seonmo Kim and Navid Emamdoost, UMN)</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">Stephen McCamant has been an Assistant Professor of Computer Science </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">and Engineering at the University of Minnesota since the fall of 2012, </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">where his main research area is program analysis for software security </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">and correctness. He is especially interested in binary code analysis </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">and transformation, hybrid dynamic/static techniques and symbolic </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">execution, information flow/taint analysis, and applications of </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">decision procedures. His research on software-based fault isolation </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">was a key foundation for the Google Native Client system, and he is </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">the primary author of the FuzzBALL binary symbolic execution system </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">which participated in the DARPA Cyber Grand Challenge and is available </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">open-source. 
He received his Ph.D from MIT in 2008, and from </span><span style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;">2008-2012 he was a postdoc at UC Berkeley.</span></p>
<div class="yj6qo" style="color: #222222; font-family: arial, sans-serif; font-size: 12.8px;"> </div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/20982018-08-04T14:36:21-04:002018-08-04T14:36:21-04:00https://talks.cs.umd.edu/talks/2098DeepBugs: A Learning Approach to Name-based Bug Detection<a href="http://software-lab.org/people/Michael_Pradel.html">Michael Pradel - TU Darmstadt</a><br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Tuesday, August 14, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Natural language elements in source code, e.g., the names of variables and functions, convey useful information. However, most existing bug detection tools ignore this information and therefore miss some classes of bugs. The few existing name-based bug detection approaches reason about names on a syntactic level and rely on manually designed and tuned algorithms to detect bugs. This talk presents DeepBugs, a learning approach to name-based bug detection, which reasons about names based on a semantic representation and which automatically learns bug detectors instead of manually writing them. We formulate bug detection as a binary classification problem and train a classifier that distinguishes correct from incorrect code. To address the challenge that effectively learning a bug detector requires examples of both correct and incorrect code, we create likely incorrect code examples from an existing corpus of code through simple code transformations. A novel insight learned from our work is that learning from artificially seeded bugs yields bug detectors that are effective at finding bugs in real-world code. We implement our idea into a framework for learning-based and name-based bug detection. 
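The seeded-bug idea can be sketched minimally (my own toy illustration in Python, not the actual DeepBugs implementation, which targets JavaScript; the `set_size` call is a hypothetical example):

```python
import ast

def swap_first_two_args(call_src):
    """Create a likely-buggy variant of a call by swapping its first two
    arguments, one of the simple code transformations used to seed bugs.
    Returns None if the expression is not a call with >= 2 arguments.
    Requires Python 3.9+ for ast.unparse."""
    tree = ast.parse(call_src, mode="eval")
    call = tree.body
    if not isinstance(call, ast.Call) or len(call.args) < 2:
        return None
    call.args[0], call.args[1] = call.args[1], call.args[0]
    return ast.unparse(tree)

# Each correct call from a corpus yields a (positive, negative) training pair;
# a classifier over identifier-name embeddings then learns to flag the
# swapped version as suspicious.
correct = "set_size(width, height)"
buggy = swap_first_two_args(correct)
print(correct, "->", buggy)  # set_size(width, height) -> set_size(height, width)
```

The point of the transformation is that argument names like `width` and `height` carry enough semantic signal for a learned model to tell the two orderings apart.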
Three bug detectors built on top of the framework detect accidentally swapped function arguments, incorrect binary operators, and incorrect operands in binary operations. Applying the approach to a corpus of 150,000 JavaScript files yields bug detectors that have a high accuracy (between 89% and 95%), are very efficient (less than 20 milliseconds per analyzed file), and reveal 102 programming mistakes (with 68% true positive rate) in real-world code.</p><br><b>Bio:</b> <p>Michael Pradel is an assistant professor at TU Darmstadt, which he joined after a PhD at ETH Zurich and a post-doc at UC Berkeley. His research interests span software engineering, programming languages, security, and machine learning, with a focus on tools and techniques for building reliable, efficient, and secure software. In particular, he is interested in dynamic program analysis, test generation, concurrency, performance profiling, JavaScript-based web applications, and machine learning-based program analysis.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/21532018-09-28T13:58:50-04:002018-10-01T18:51:25-04:00https://talks.cs.umd.edu/talks/2153USENIX Security 2018 Lightning TalksMC2 Grad Students - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, October 5, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p class="p1">We will discuss 5 papers among these published at USENIX2018. Each presenter will introduce the paper contributions and moderate a discussion around it. We will allocate 12 minutes for each paper. Lunch will be provided.</p>
<p class="p1">The full list of papers published at USENIX Security 2018 can be found here: https://www.usenix.org/conference/usenixsecurity18/technical-sessions</p>
<p class="p1">The list of presented papers and their moderators:</p>
<p class="p1">• Fear the Reaper: Characterization and Fast Detection of Card Skimmers - Yigitcan Kaya (https://www.usenix.org/conference/usenixsecurity18/presentation/scaife)</p>
<p class="p1">• Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution - Erin Avllazagaj (https://www.usenix.org/conference/usenixsecurity18/presentation/bulck)</p>
<p class="p1">• Turning Your Weakness Into a Strength: Watermarking Deep Neural Networks by Backdooring - Sanghyun Hong (https://www.usenix.org/conference/usenixsecurity18/presentation/adi)</p>
<p class="p1">• AttriGuard: A Practical Defense Against Attribute Inference Attacks via Adversarial Machine Learning - Virinchi Srinivas (https://www.usenix.org/conference/usenixsecurity18/presentation/jia-jinyuan)</p>
<p class="p1">• End-to-End Measurements of Email Spoofing Attacks - Militaru Cristian (https://www.usenix.org/conference/usenixsecurity18/presentation/hu)</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/21542018-09-28T15:18:48-04:002018-10-05T07:11:39-04:00https://talks.cs.umd.edu/talks/2154SoK: Security and Privacy in Machine LearningNeal Gupta - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, October 12, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>In this talk, Neal Gupta will present an SoK paper on adversarial machine learning (AML). The paper, SoK: Security and Privacy in Machine Learning by Papernot et al., was presented at Euro S&P 2018, and it constitutes a great overview of the research in AML, providing a categorization of the attacks and defenses proposed so far. Adversarial machine learning is an emerging hot topic, and I recommend that everyone attend the talk. Lunch will be provided.</p>
<p> </p>
<p>Advances in machine learning (ML) in recent years have enabled a dizzying array of applications such as data analytics, autonomous systems, and security diagnostics. ML is now pervasive—new systems and models are being deployed in every domain imaginable, leading to widespread deployment of software based inference and decision making. There is growing recognition that ML exposes new vulnerabilities in software systems, yet the technical community’s understanding of the nature and extent of these vulnerabilities remains limited. We systematize findings on ML security and privacy, focusing on attacks identified on these systems and defenses crafted to date. We articulate a comprehensive threat model for ML, and categorize attacks and defenses within an adversarial framework. Key insights resulting from works both in the ML and security communities are identified and the effectiveness of approaches are related to structural elements of ML algorithms and the data used to train them. In particular, it is apparent that constructing a theoretical understanding of the sensitivity of modern ML algorithms to the data they analyze, a la PAC theory, will foster a science of security and privacy in ML.</p><br><b>Bio:</b> <p>Neal Gupta is a PhD student in Computer Science. From 2011-12 he was a Master's student in Economics at the London School of Economics, and he has an undergraduate degree in Applied Mathematics from Harvard College. His research interests include mathematical network analysis, applied mathematical optimization, and automated planning.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/21872018-10-26T12:21:34-04:002018-11-07T11:20:27-05:00https://talks.cs.umd.edu/talks/2187CCS2018 Talks from the MC2 FacultyProf. Katz and Prof. Hicks - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. 
Williams Building (AVW)</a><br>Friday, November 9, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p class="p1">We will discuss two papers published at CCS 2018 by University of Maryland researchers. Our faculty members in the Maryland Cybersecurity Center (MC2), Prof. Jonathan Katz and Prof. Michael Hicks, will give talks on the papers their groups published. These papers are:</p>
<p class="p1">- <em>Improved Non-Interactive Zero Knowledge with Applications to Post-Quantum Signatures</em> by <strong>Jonathan Katz </strong>(University of Maryland), Vladimir Kolesnikov (Georgia Tech), Xiao Wang (University of Maryland). --- available at https://eprint.iacr.org/2018/475.pdf</p>
<p class="p1">- <em>Evaluating Fuzz Testing </em>by George Klees (University of Maryland), Andrew Ruef (University of Maryland), Benji Cooper (University of Maryland), Shiyi Wei (University of Texas at Dallas), <strong>Michael Hicks </strong>(University of Maryland). --- available at https://arxiv.org/pdf/1808.09700.pdf</p>
<p class="p1">Talks will be 30 minutes each, including the presentation and the discussion. Please join us to learn about recent cutting-edge research projects in our lab from leading experts in their fields.</p>
<p class="p1">For a light lunch, we will also have organic sandwiches, appetizers and drinks.</p><br><b>Bio:</b> <p>You can find the personal websites of Prof. Katz and Prof. Hicks here: https://www.cs.umd.edu/~jkatz/ and http://www.cs.umd.edu/~mwh/</p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/22172018-12-04T16:58:14-05:002018-12-07T08:25:00-05:00https://talks.cs.umd.edu/talks/2217Talks from Maryland Cybersecurity Center ResearchersProf. Papamanthou, Wei Bai and Kelsey Fulton - UMD<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3460 A.V. Williams Building (AVW)</a><br>Friday, December 7, 2018, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <div class="abstract-body">
<div class="gs" style="margin: 0px; padding: 0px 0px 20px; width: 998.542px; color: #222222; font-family: Roboto, RobotoDraft, Helvetica, Arial, sans-serif; font-size: medium;">
<div class="">
<div id=":17a" class="ii gt" style="font-size: 12.8px; direction: ltr; margin: 8px 0px 0px; padding: 0px; position: relative;">
<div id=":17b" class="a3s aXjCH " style="overflow: hidden; font-variant-numeric: normal; font-variant-east-asian: normal; font-stretch: normal; font-size: small; line-height: 1.5; font-family: Arial, Helvetica, sans-serif;">
<div dir="ltr">
<div dir="ltr">
<div dir="auto">
<div dir="auto" style="font-family: sans-serif; font-size: 12.8px;">Schedule Updated: We now have three talks instead of two!</div>
<div dir="auto">
<p style="font-family: sans-serif; font-size: 12.8px;">This week, in the last MC2 reading group meeting of 2018, we will have three speakers presenting their work on blockchain and human security behavior. For an early lunch, we will also have organic sandwiches, appetizers and drinks.</p>
<p style="font-family: sans-serif; font-size: 12.8px;">The first talk will be from Prof. Babis Papamanthou on 'Applications of Verifiable Computation in Blockchains and Cryptocurrencies'. Prof. Papamanthou gave this keynote talk at the Symposium on Foundations and Applications of Blockchain 2018. The talk will cover how protocols for verifiable computation can address the scalability, security and privacy concerns regarding blockchains. (Find more details here: <a style="color: #4285f4; text-decoration-line: none;" href="https://scfab.github.io/2018/keynotes.html" rel="noopener noreferrer noreferrer noreferrer">https://scfab.github.io/2018/keynotes.html</a>)</p>
<p><span style="font-family: sans-serif;"><span style="font-size: 12.8px;">The second talk will be from Wei Bai. This talk will cover the challenges non-expert users face in using end-to-end encryption correctly. Wei's</span></span><span style="font-family: sans-serif; font-size: 12.8px;"> paper takes a first step toward providing high-level, roughly correct information about end-to-end encryption to non-experts. (Find their Euro S&P paper here: </span><span style="font-family: sans-serif;"><span style="font-size: 12.8px;"><a style="color: #1155cc;" href="https://ece.umd.edu/~wbai/assets/papers/eurosp18.pdf" rel="noopener">https://ece.umd.edu/~wbai/assets/papers/eurosp18.pdf</a>)</span></span></p>
<p style="font-family: sans-serif; font-size: 12.8px;">The final talk will be from Kelsey Fulton. In this talk, Kelsey will cover their work on what users have learned about computer security from mass media and how they evaluate what is and isn't realistic within fictional portrayals.</p>
<p style="font-family: sans-serif; font-size: 12.8px;">We are ending the year with three great talks, accessible to a general audience. Looking forward to seeing you all there!</p>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div>
</div><br><b>Bio:</b> <p>Charalampos (Babis) Papamanthou is an assistant professor of Electrical and Computer Engineering at the University of Maryland, College Park, where he joined in 2013 after a postdoc at UC Berkeley. At Maryland, he is also affiliated with the Institute for Advanced Computer Studies (UMIACS), where he is a member of the Maryland Cybersecurity Center (MC2). He works on applied cryptography and computer security---and especially on technologies, systems and theory for secure and private cloud computing. While at College Park, he received the NSF CAREER award, the Google Faculty Research Award, the Yahoo! Faculty Research Engagement Award, the NetApp Faculty Fellowship, the 2013 UMD Invention of the Year Award, the 2014 Jimmy Lin Award for Invention and the George Corcoran Award for Excellence in Teaching. His research is currently funded by federal agencies (NSF, NIST and NSA) and by the industry (Google, Yahoo!, NetApp and Amazon). His PhD is in Computer Science from Brown University (2011) and he also holds an MSc in Computer Science from the University of Crete (2005), where he was a member of ICS-FORTH. His work has received over 3,000 citations and he has published in venues and journals spanning theoretical and applied cryptography, systems and database security, graph algorithms and visualization and operations research.</p>
<p>Wei Bai is a Ph.D. candidate in the Department of Electrical and Computer Engineering at the University of Maryland, College Park, advised by Prof. Michelle Mazurek. His research focuses on human factors in security and privacy, combining security, privacy, and human-computer interaction. He is interested in understanding users’ security and privacy perceptions and behaviors, and in designing or improving systems to be both secure and usable.</p>
<p>Kelsey Fulton is also a Ph.D. candidate at the University of Maryland, College Park, likewise advised by Prof. Michelle Mazurek.</p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/22242019-01-02T17:15:47-05:002019-01-02T17:15:47-05:00https://talks.cs.umd.edu/talks/2224Combining Asynchronous and Synchronous Byzantine Agreement: The Best of Both WorldsJulian Loss - Bochum University<br><a href="http://www.umd.edu/CampusMaps/bld_detail.cfm?bld_code=AVW">3400 A.V. Williams Building (AVW)</a><br>Monday, January 28, 2019, 4:00-5:00 pm<br><br><b>Abstract:</b> <div style="margin: 0px; font-stretch: normal; line-height: normal;">In the problem of byzantine agreement (BA), n parties wish to agree on a value by jointly running a distributed protocol. The protocol is deemed secure if it achieves this goal in spite of a malicious adversary that corrupts a certain fraction of the parties and can make them behave in arbitrarily malicious ways. Since its first formalization by Lamport et al., the problem of BA has been extensively studied in the literature under many different assumptions. One common way to classify protocols for BA is by their synchrony and network assumptions. For example, some protocols offer resilience against up to f<n/2 corrupted parties by assuming a synchronized, but possibly slow, network, in which parties share a global clock and messages are guaranteed to arrive after a given time. By comparison, other protocols achieve much higher efficiency and work without these assumptions, but can tolerate only f<n/3 corrupted parties. A natural question is whether it is possible to combine protocols from these two regimes to achieve the "best of both worlds": protocols that are both efficient *and* robust. In this work, we answer this question in the affirmative.</div>
<div style="margin: 0px; font-stretch: normal; line-height: normal;"> </div>
<div style="margin: 0px; font-stretch: normal; line-height: normal;">Concretely, we make the following contributions:</div>
<div style="margin: 0px; font-stretch: normal; line-height: normal;">- We give the first generic compilers that combine BA protocols under different network and synchrony assumptions and preserve both their efficiency and robustness. Our constructions are simple and rely solely on a secure signature scheme.</div>
<div style="margin: 0px; font-stretch: normal; line-height: normal;">- We prove that our constructions achieve optimal corruption bounds.</div>
<div style="margin: 0px; font-stretch: normal; line-height: normal;">- Finally, we give the first efficient protocol for asynchronous byzantine agreement (ABA) that tolerates *adaptive* corruptions and matches the communication complexity of the best protocols in the static-corruption case.</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/23552019-05-07T22:03:55-04:002019-05-07T22:03:55-04:00https://talks.cs.umd.edu/talks/2355Differentially Private Nonparametric Hypothesis TestingAdam Groce - Reed College<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">3137 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Monday, May 13, 2019, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Hypothesis testing is the workhorse statistical analysis in medical and social science research. In this talk I will discuss how hypothesis tests can be carried out on sensitive data while protecting privacy. In particular, we consider nonparametric tests, constructing private analogues to the classic Kruskal-Wallis, Mann-Whitney, and Wilcoxon tests. In traditional statistics, these nonparametric tests are less powerful than their parametric alternatives, which work in the special case of normally distributed data. We find that in the private setting our nonparametric tests are actually more powerful than the best known parametric tests, despite their reduced assumptions. This is joint work with Andrew Bray, Simon Couch, Zeki Kazan, and Kaiyan Shi.</p><br><b>Bio:</b> <p>Adam Groce received his PhD from UMD in 2014, where he was supervised by Jonathan Katz. 
He is currently an assistant professor of computer science at Reed College.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/6">CATS</a> ⋅ <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/23612019-05-14T16:00:55-04:002019-05-14T16:00:55-04:00https://talks.cs.umd.edu/talks/2361Pacer: Network Side Channel Mitigation in the CloudAastha Mehta - MPI<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Friday, May 17, 2019, 11:00-11:59 am<br><br><b>Abstract:</b> <p>An important concern for many Cloud customers is data confidentiality. A particular concern is data leak via side channels, which arise when mutually distrusting tenants contend on resources such as CPUs, caches, memory, and network in the Cloud. In this talk, I will present our system, Pacer, which mitigates side channels arising from shared network links. Pacer shapes the outbound traffic of a Cloud tenant to make it independent of the tenant's secrets. At the same time, Pacer allows variations in the traffic shape that reveal only public (non-secret) aspects of the tenants' workloads, thus enabling efficient sharing of network resources. Implementing Pacer requires modest changes to the Cloud hypervisor and guest OS, and minimal changes to the guest application. 
Our experimental results show that Pacer can protect the guests' secrets with modest overheads on bandwidth and throughput.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/23652019-05-31T11:04:21-04:002019-05-31T11:04:21-04:00https://talks.cs.umd.edu/talks/2365Rendezvous: Communication with Cryptographically Protected MetadataSaba Eskandarian - Stanford University<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Thursday, June 27, 2019, 11:00-11:59 am<br><br><b>Abstract:</b> <div>
<p>Rendezvous is a communication system that cryptographically protects metadata. Unlike all existing systems for metadata-hiding communication, Rendezvous does not require users to communicate in synchronous messaging rounds: Rendezvous provides meaningful metadata-hiding guarantees even if different users interact with the system at different rates. A Rendezvous deployment consists of a three-server cluster, and the system protects user privacy even if an active attacker controls one of the servers and any number of users.</p>
<p>Every pair of Rendezvous users shares a secret virtual address that points to a unique mailbox stored at the servers. By cryptographically protecting accesses to virtual addresses, the honest servers prevent malicious servers and users from learning which mailbox has been updated when. By applying new cryptographic tools for detecting disruption attacks by malicious clients, Rendezvous reduces the bandwidth cost per message from O(√N) to O(log N) bits in an N-user deployment, which yields 4× and 8× overall performance improvements on the server and client sides, respectively, and reduces communication costs by one or more orders of magnitude. Finally, we discuss how Rendezvous might apply in practice to protect communication between journalists and sources.</p>
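<p>The pairwise "virtual address" idea can be sketched as both endpoints deriving the same per-epoch mailbox identifier from their shared secret. The sketch below is purely illustrative (the function name and epoch scheme are assumptions, not Rendezvous's actual construction, which additionally hides the accesses themselves cryptographically):</p>

```python
import hashlib
import hmac

def mailbox_address(shared_secret: bytes, epoch: int) -> str:
    # Both parties compute the same pseudorandom identifier;
    # the servers see only opaque-looking addresses.
    mac = hmac.new(shared_secret, epoch.to_bytes(8, "big"), hashlib.sha256)
    return mac.hexdigest()

# Both endpoints of a pair agree on the address for an epoch,
# and the address rotates from epoch to epoch.
assert mailbox_address(b"pairwise secret", 1) == mailbox_address(b"pairwise secret", 1)
assert mailbox_address(b"pairwise secret", 1) != mailbox_address(b"pairwise secret", 2)
```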
<p>This is joint work with Henry Corrigan-Gibbs, Matei Zaharia, and Dan Boneh.</p>
</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/20">Crypto Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/3">MC2 Seminar</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/24592019-11-06T12:27:16-05:002019-11-06T12:27:16-05:00https://talks.cs.umd.edu/talks/2459Data-Driven Software Maintenance<a href="http://people.cs.vt.edu/nm8247/">Na Meng - Virginia Tech</a><br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">3137 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Monday, November 25, 2019, 11:00 am-12:00 pm<br><br><b>Abstract:</b> <p>Nowadays software is widely used in almost every domain. When software applications contain defects or errors, these errors or software bugs can trigger security problems, cause financial loss, or even jeopardize human health. However, maintaining software to remove all those errors is usually challenging. This is because to resolve a software issue, developers usually spend lots of time and effort in order to comprehend programs, so that they can apply program changes consistently, completely, and correctly. When developers have insufficient domain knowledge or misunderstand the program logic, they may fail to fix the bug or their bug fixes can actually introduce new bugs.</p>
<p>In this talk, I will present our recent research that aims to bridge the gap between program complexity and developers’ programming capabilities. The talk has three parts. In the first part, I will introduce our empirical study on developers’ secure coding practices. By crawling and analyzing developers’ technical discussions on the StackOverflow website, we identified various programming challenges that developers encounter when they build security functionalities. We also showed security vulnerabilities due to developers’ security API misuses. In the second part, I will present a related empirical study examining the reliability of security suggestions on StackOverflow, which reveals a worrisome reality in the software development industry. In the third part, I will present our recent tool that recommends code refactorings for developers. All of our empirical studies and techniques have the potential to help developers (1) better understand program complexity and the complexity of software maintenance, and (2) improve program maintenance as well as software quality. </p><br><b>Bio:</b> <p>Dr. Na Meng is an assistant professor in the Department of Computer Science at Virginia Tech, U.S. (since 2015). She received her PhD in Computer Science at The University of Texas at Austin, U.S. (2014). Her research interests include Software Engineering and Programming Languages. She focuses on conducting empirical studies of software bugs and fixes, and on investigating new approaches to help developers comprehend programs and changes, detect and fix bugs, and modify code automatically. Dr. Meng is now also exploring automatic fixing of security bugs. Dr. Meng received the NSF CAREER Award in 2019. 
</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/1">PL Reading Group</a> ⋅ <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25242020-02-10T22:35:37-05:002020-02-10T22:35:37-05:00https://talks.cs.umd.edu/talks/2524Network-Agnostic State Machine ReplicationErica Blum<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 11, 2020, 12:00-1:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">We study the problem of state machine replication (SMR) -- the underlying problem addressed by blockchain protocols -- in the presence of a malicious adversary who can corrupt some fraction of the parties running the protocol. Existing protocols for this task assume either a synchronous network (where all messages are delivered within some known time delta) or an asynchronous network (where messages can be delayed arbitrarily). Although protocols for the latter case give seemingly stronger guarantees, in fact they are incomparable since they (inherently) tolerate a lower fraction of corrupted parties. </span><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">We design an SMR protocol that is network-agnostic in the following sense: if it is run in a synchronous network, it tolerates t_s corrupted parties; if the network happens to be asynchronous it is resilient to t_a <= t_s faults. 
Our protocol achieves optimal tradeoffs between t_s and t_a.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25312020-02-17T14:16:25-05:002020-02-17T14:35:43-05:00https://talks.cs.umd.edu/talks/2531Prio: Private, Robust, and Scalable Computation of Aggregate StatisticsNo speaker yet<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 18, 2020, 11:30 am-12:30 pm<br><br><b>Abstract:</b> <p><a href="https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/corrigan-gibbs">https://www.usenix.org/conference/nsdi17/technical-sessions/presentation/corrigan-gibbs</a></p>
<div class="field field-name-field-paper-description field-type-text-long field-label-above" style="padding: 0.5em 0px; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">
<div class="field-items" style="margin: 0px auto; padding-left: 10px; padding-right: 10px; max-width: 1200px;">
<div class="field-item odd">
<p style="margin: 0px 0px 0.7em;">"This paper presents Prio, a privacy-preserving system for the collection of aggregate statistics. Each Prio client holds a private data value (e.g., its current location), and a small set of servers compute statistical functions over the values of all clients (e.g., the most popular location). As long as at least one server is honest, the Prio servers learn nearly nothing about the clients’ private data, except what they can infer from the aggregate statistics that the system computes. To protect functionality in the face of faulty or malicious clients, Prio uses <em>secret-shared non-interactive proofs</em> (SNIPs), a new cryptographic technique that yields a hundred-fold performance improvement over conventional zero-knowledge approaches. Prio extends classic private aggregation techniques to enable the collection of a large class of useful statistics. For example, Prio can perform a least-squares regression on high-dimensional client-provided data without ever seeing the data in the clear."</p>
</div>
</div>
</div>
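<p>The private-aggregation core of such a system can be illustrated with plain additive secret sharing: each client splits its value into random shares, one per server, so no single server learns any individual value, yet the servers' totals combine to the true aggregate. This minimal sketch omits Prio's SNIP proofs, which defend against malformed client submissions:</p>

```python
import secrets

P = 2**61 - 1  # prime modulus (illustrative choice)

def share(value, n_servers):
    """Split `value` into n additive shares mod P."""
    shares = [secrets.randbelow(P) for _ in range(n_servers - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

def aggregate(client_values, n_servers=2):
    server_totals = [0] * n_servers
    for v in client_values:
        # Each server receives and sums one share per client;
        # any single server's view is uniformly random.
        for i, s in enumerate(share(v, n_servers)):
            server_totals[i] = (server_totals[i] + s) % P
    # Combining the per-server totals reveals only the sum.
    return sum(server_totals) % P

assert aggregate([3, 10, 4]) == 17
```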
<div class="field field-name-field-paper-people field-type-node-reference field-label-hidden" style="color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;">
</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25402020-02-23T20:37:58-05:002020-02-23T20:38:15-05:00https://talks.cs.umd.edu/talks/2540Execution Environments for Achieving Keyless CDNsStephen Herwig<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 25, 2020, 12:00-1:00 pm<br><br><b>Abstract:</b> <p><span style="color: #1d1c1d; font-family: Slack-Lato, appleLogo, sans-serif; font-size: 15px; font-variant-ligatures: common-ligatures; white-space: pre-wrap;">Content Delivery Networks (CDNs) serve a large and increasing portion of today's web content. Beyond caching, CDNs provide their customers with a variety of services, including protection against DDoS and targeted attacks. As the web shifts from HTTP to HTTPS, CDNs continue to provide such services by also assuming control of their customers' private keys, thereby breaking a fundamental security principle: private keys must only be known by their owner</span><span style="color: #1d1c1d; font-family: Slack-Lato, appleLogo, sans-serif; font-size: 15px; font-variant-ligatures: common-ligatures; white-space: pre-wrap;">.</span></p>
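<p>The "keyless" split can be sketched as a remote-signing protocol: the edge server terminates the connection but forwards each signing operation to a key server under the customer's control, so the private key never leaves the customer's machine. The toy below uses textbook RSA at insecure sizes purely to show the shape of this split; it is not the design of either system presented in the talk:</p>

```python
import hashlib

# Tiny textbook RSA key pair (insecure parameters, illustration only).
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # private exponent: held only by the key server

def key_server_sign(digest):
    """Runs on the customer's machine, the only place `d` exists."""
    return pow(digest, d, n)

def edge_handle_handshake(transcript):
    # The CDN edge hashes the handshake transcript and delegates signing.
    digest = int.from_bytes(hashlib.sha256(transcript).digest(), "big") % n
    sig = key_server_sign(digest)
    assert pow(sig, e, n) == digest  # anyone can verify with the public key
    return sig

edge_handle_handshake(b"client+server hello")
```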
<p><span style="color: #1d1c1d; font-family: Slack-Lato, appleLogo, sans-serif; font-size: 15px; font-variant-ligatures: common-ligatures; white-space: pre-wrap;">In this talk, I present two approaches to running unmodified, legacy CDN services without the CDN having access to the customers' private keys. My first approach, conclaves, uses Intel SGX secure hardware to run the CDN software in a trusted execution environment. In its strongest configuration, conclaves reduces the knowledge of the edge server to that of a traditional on-path HTTPS adversary. My second approach, co-domains, uses a taint-tracking emulator to migrate a CDN process to a customer's trusted machine for operations involving the private key. Both conclaves and co-domains are specific examples of using virtualization techniques to transparently add security guarantees post hoc to unmodified binaries.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25412020-02-23T20:59:27-05:002020-02-23T20:59:27-05:00https://talks.cs.umd.edu/talks/2541Fuzzing fast with vectorized emulationBrandon Falk<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Monday, February 24, 2020, 12:00-1:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">There's a lot of effort that goes into making good fuzzers. Developing better mutators and generators, collecting larger corpuses, and of course, manual auditing for fuzz-worthy paths. But, what if instead of doing all the fuzz related things, you just put all of your efforts into making fuzzing as fast as possible? I've done that work for you! 
I've worked on many high performance fuzzers, ranging from custom hypervisors, to modifying QEMU, and finally to vectorized emulation. Vectorized emulation leverages the AVX-512 instruction set to run 8 (or 16) VMs at a time in parallel. While initially the only goal was high-performance emulation, it turns out that the information that can be collected by diffing multiple VMs while they are executing can result in "solver-like" behavior for simple cases. This technique gives immediate and low-cost feedback on what aspects of execution were affected by the mutations that were performed in the input.</span></p>
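<p>The lane-diffing idea can be sketched with NumPy standing in for AVX-512: hold each register as a vector with one lane per VM, advance all VMs with single vector operations, and diff the lanes to see which mutated inputs changed a branch outcome. This is an illustrative toy under those assumptions, not the actual engine described in the talk:</p>

```python
import numpy as np

# 8 VM instances run in lockstep; each lane holds one VM's register
# state, seeded with a differently mutated input.
inputs = np.arange(8, dtype=np.uint32)
reg = inputs * np.uint32(3)          # one vector op steps all 8 VMs at once
taken = reg > np.uint32(9)           # a branch condition, evaluated per lane
# Lanes whose branch outcome differs from lane 0's reveal which
# mutations affected this point of execution.
divergent = np.flatnonzero(taken != taken[0])
assert list(divergent) == [4, 5, 6, 7]
```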
<p><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">I'll talk about the performance ramifications of vectorized emulation, the hardening techniques used to apply ASAN-style protections of binary targets, data that can be extracted from targets, and some examples of what all of this together can do!</span></p><br><b>Bio:</b> <p><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">Brandon is a security researcher with over 8 years of professional experience. He has specialized in fuzzing and harnessing, leading him to write various hypervisors and emulators to assist in fuzzing. By putting an emphasis on scalability and performance, he continuously produces some of the fastest fuzzers around. These tools are often written to gather more than standard code coverage, while still maintaining the ability to work on targets without source or even a system to run them on. Brandon recently has been spending effort on looking into how Intel CPUs work internally, attempting to document undocumented behavior with the end goal of looking for Meltdown style attacks.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25522020-03-02T11:45:58-05:002020-03-02T11:45:58-05:00https://talks.cs.umd.edu/talks/2552Secrecy, Flagging, and Paranoia Revisited: User Attitudes Toward Encrypted Messaging Apps Omer Akgul<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 3, 2020, 12:00-1:00 pm<br><br><b>Abstract:</b> <p><span style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small;">With the popularity of tools like WhatsApp, end-to-end encryption (E2EE) is more widely available than ever before. 
Nonetheless, user perceptions lag behind. Users often do not understand E2EE's security properties or believe them sufficient. Thus, even users with access to E2EE tools turn to less-secure alternatives for sending confidential information. To better understand these issues, we conducted a 357-participant online user study analyzing how explanations of encryption impact user perceptions. We showed participants an app-store-style description of a messaging tool, varying the terminology used, whether encryption was on by default, and the prominence of encryption. We collected perceptions of the tool's security guarantees, appropriateness for privacy-focused use by whom and for what purpose, and perceptions of paranoia. Compared to "secure", describing the tool as "encrypted" or "military-grade encrypted" increased perceptions it was appropriate for privacy-sensitive tasks, whereas describing it more precisely as "end-to-end encrypted" did not. Prior work had found an association between the use of encryption and being perceived as paranoid. We found this link minimized, but still partially applicable. 
Nonetheless, participants perceived encrypted tools as appropriate for general tasks.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/25592020-03-09T17:13:12-04:002020-03-09T17:13:12-04:00https://talks.cs.umd.edu/talks/2559A large scale investigation of obfuscation use in Google PlayYasemin Acar<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 10, 2020, 12:00-1:00 pm<br><br><b>Abstract:</b> <div style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"> </div>
<div style="color: #222222; font-family: Arial, Helvetica, sans-serif; font-size: small; font-style: normal; font-variant-ligatures: normal; font-variant-caps: normal; font-weight: 400; letter-spacing: normal; orphans: 2; text-align: start; text-indent: 0px; text-transform: none; white-space: normal; widows: 2; word-spacing: 0px; background-color: #ffffff; text-decoration-style: initial; text-decoration-color: initial;"><span style="color: #333333; font-variant-ligatures: normal;">Android applications are frequently plagiarized or repackaged, and software obfuscation is a recommended protection against these practices. However, until recently there was very little data on the overall rates of app obfuscation, the techniques used, or the factors that lead developers to choose to obfuscate their apps. In our 2018 paper, we presented the first comprehensive analysis of the use of and challenges to software obfuscation in Android applications. We analyzed 1.7 million free Android apps from Google Play to detect various obfuscation techniques, finding that only 24.92% of apps are obfuscated by the developer. To better understand this rate of obfuscation, we surveyed 308 Google Play developers about their experiences and attitudes toward obfuscation. We found that while developers feel that apps in general are at risk of plagiarism, they do not fear theft of their own apps. Developers also report difficulties obfuscating their own apps. To better understand these difficulties, we conducted a follow-up study in which the vast majority of 70 participants failed to obfuscate a realistic sample app, even while many mistakenly believed they had succeeded. These findings have broad implications both for improving the security of Android apps and for all tools that aim to help developers write more secure software. 
They also reflect that any first line of defense needs to be sufficiently usable and well-understood for the respective actors to invest time and effort in widespread deployment.</span></div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/26482020-09-21T10:26:32-04:002020-09-21T10:26:32-04:00https://talks.cs.umd.edu/talks/2648Folk Models of Home Computer SecurityNoel Warford<br><br>Monday, September 21, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Home computer systems are insecure because they are administered by untrained users. The rise of botnets has amplified this problem; attackers compromise these computers, aggregate them, and use the resulting network to attack third parties. Despite a large security industry that provides software and advice, home computer users remain vulnerable. I identify eight ‘folk models’ of security threats that are used by home computer users to decide what security software to use, and which expert security advice to follow: four conceptualizations of ‘viruses’ and other malware, and four conceptualizations of ‘hackers’ that break into computers. I illustrate how these models are used to justify ignoring expert security advice. 
Finally, I describe one reason why botnets are so difficult to eliminate: they cleverly take advantage of gaps in these models so that many home computer users do not take steps to protect against them.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/26652020-10-01T15:02:52-04:002020-10-01T15:02:52-04:00https://talks.cs.umd.edu/talks/2665Evaluating In-Workflow Messages for Improving Mental Models of End-to-End EncryptionOmer Akgul<br><br>Monday, October 5, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Research has repeatedly established that although many messaging apps (WhatsApp, iMessage, Signal etc.) have incorporated end-to-end encryption (E2EE) as a feature, user understandings of E2EE communications are not completely accurate. As a result, some users may turn to less secure platforms (e.g., SMS or landline calls) to exchange confidential information, may not know how to react to some E2EE related tasks, such as performing authentication ceremonies. These misunderstandings can cause users greater security and privacy risks than they realize. Our work aims to tackle this issue by creating and utilizing practical explanations of E2EE to improve the functionality of users’ mental models.</p>
<p>We developed our educational efforts through a series of user studies. First, we conducted a participatory-design tutorial study (n=25) to understand what information about E2EE is most useful to and will likely be absorbed by end users. Based on the results, we generated short, medium, and long-length educational texts and measured their effectiveness in isolation with an online survey study (n=459). Finally, we evaluated the messages in context with a longitudinal study (n=61). We incorporated the best-performing messages into an exemplar open-source messaging app (based on Signal), and asked participants to interact with it for three weeks.</p>
<p>In this talk, we will discuss our design approach and the results of our intervention on users’ mental models. We will share the implications of our work for the UX design of privacy-preserving communications tools.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/26862020-10-14T12:54:23-04:002020-10-19T13:39:30-04:00https://talks.cs.umd.edu/talks/2686Privacy Pass: Bypassing Internet Challenges AnonymouslyMichael Rosenberg<br>https://umd.zoom.us/j/99459651889<br>Monday, October 19, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Privacy Pass is a protocol which helps Tor users avoid having to do so many CAPTCHAs (a thing that happens frequently if you’re on Tor), while still retaining anonymity. The cryptographic cornerstone of this construction is a Verifiable Oblivious Pseudorandom Function. We’re gonna break that down and talk about what this scheme can and cannot do this Monday.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/26952020-10-25T13:30:28-04:002020-10-26T10:40:25-04:00https://talks.cs.umd.edu/talks/2695Automating Censorship EvasionKevin Bock<br>https://umd.zoom.us/j/99459651889?pwd=S2pmRWlxRzIxaFYzQUdqTjdzeng5Zz09<br>Monday, October 26, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Researchers and censoring regimes have long engaged in a cat-and-mouse game, leading to increasingly sophisticated Internet-scale censorship techniques and methods to evade them. Recently, we developed Geneva, a novel genetic algorithm that evolves packet-manipulation-based censorship evasion strategies against nation-state level censors. By automating the censorship evasion process, Geneva has opened the door for researchers to respond quickly to new censorship events. 
In this talk, I will discuss the challenges and what we’ve learned this past year using Geneva for rapid censorship response against new censorship systems in Iran, China, and India. I will conclude this talk by discussing ongoing work with Geneva, and new forms of censorship evasion.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/27012020-11-01T21:35:57-05:002020-11-01T21:35:57-05:00https://talks.cs.umd.edu/talks/2701The Effect of Entertainment Media on Mental Models of Computer SecurityKelsey Fulton<br>https://umd.zoom.us/j/99459651889?pwd=S2pmRWlxRzIxaFYzQUdqTjdzeng5Zz09<br>Monday, November 2, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>When people inevitably need to make decisions about their computer-security posture, they rely on their mental models of threats and potential targets. Research has demonstrated that these mental models, which are often incomplete or incorrect, are informed in part by fictional portrayals in television and film. Inspired by prior research in public health demonstrating that efforts to ensure accuracy in the portrayal of medical situations have had an overall positive effect on public medical knowledge, we explore the relationship between computer security and fictional television and film. We report on a semi-structured interview study (n=19) investigating what users have learned about computer security from mass media and how they evaluate what is and is not realistic within fictional portrayals. In addition to confirming prior findings that television and film shape users’ mental models of security, we identify specific misconceptions that appear to align directly with common fictional tropes. We identify specific proxies that people use to evaluate realism and examine how they influence these misconceptions.
We conclude with recommendations for security researchers as well as creators of fictional media when considering how to improve people’s understanding of computer-security concepts and behaviors.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/27052020-11-04T12:40:46-05:002020-11-04T12:40:46-05:00https://talks.cs.umd.edu/talks/2705Probabilistically Almost-Oblivious Computation<a href="https://www.impredicative.org/">Ian Sweet - University of Maryland - College Park</a><br>https://umd.zoom.us/j/99459651889?pwd=S2pmRWlxRzIxaFYzQUdqTjdzeng5Zz09<br>Monday, November 9, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="font-family: Georgia;">Memory-trace Obliviousness (MTO) is a noninterference property: programs that enjoy it have neither explicit nor implicit information leaks, even when the adversary can observe the program counter and the address trace of memory accesses. Probabilistic MTO relaxes MTO to accept probabilistic programs. In prior work, we developed λobliv, whose type system aims to enforce PMTO. We showed that λobliv could typecheck (recursive) Tree ORAM, a sophisticated algorithm that implements a probabilistically oblivious key-value store. We conjectured that λobliv ought to be able to typecheck more optimized oblivious data structures (ODSs), but that its type system was as yet too weak.</span><br style="font-family: Georgia;"><br style="font-family: Georgia;"><span style="font-family: Georgia;">In this talk we show we were wrong: ODSs cannot be implemented in λobliv because they are not actually PMTO, due to the possibility of overflow, which occurs when an ORAM write silently fails due to a local lack of space. This was surprising to us because Tree ORAM can also overflow but is still PMTO. 
The paper explains what is going on and sketches the task of adapting the PMTO property, and λobliv’s type system, to characterize ODS security.</span></p><br><b>Bio:</b> <p><span style="font-family: Georgia;">Ian Sweet is a PhD student in the Computer Science Department at the University of Maryland, College Park, advised by Dr. Mike Hicks. He is a member of the Programming Languages at University of Maryland (PLUM) group. His research focuses on the design, implementation, and verification of secure programming languages.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/27132020-11-16T10:10:53-05:002020-11-16T10:10:53-05:00https://talks.cs.umd.edu/talks/2713The Honey Badger of BFT ProtocolsErica Blum<br>https://umd.zoom.us/j/99459651889?pwd=S2pmRWlxRzIxaFYzQUdqTjdzeng5Zz09<br>Monday, November 16, 2020, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Distributed fault-tolerant protocols offer correctness guarantees even when some parties are faulty. In this talk, we'll look at a Byzantine fault-tolerant protocol for atomic broadcast (HoneyBadgerBFT, Miller et al.) and discuss how the state of the art has advanced overall in recent years.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/28932021-08-31T12:01:28-04:002021-08-31T12:01:28-04:00https://talks.cs.umd.edu/talks/2893Welcome BackN/A<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 7, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>For this first meeting, we'll get together and introduce ourselves so that we can get (re)acquainted.
</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29232021-09-20T20:43:29-04:002021-09-20T20:43:29-04:00https://talks.cs.umd.edu/talks/2923Identifying Harmful Media in End-to-End Encrypted Communication: Efficient Private Membership ComputationMichael Rosenberg<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 21, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">End-to-end encryption (E2EE) poses a challenge for automated detection of harmful media, such as child sexual abuse material and extremist content. The predominant approach at present, perceptual hash matching, is not viable because in E2EE a communications service cannot access user content.</p>
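<p>As background for the hash-matching setting described above, the following is a minimal sketch of perceptual ("average") hashing; the 8x8 grid and Hamming threshold are illustrative choices, not parameters from the talk, and deployed functions such as PhotoDNA or PDQ are far more robust.</p>

```python
# Minimal illustration of perceptual ("average") hashing: an image is
# downsampled to a small grayscale grid, each cell is compared to the mean,
# and the resulting bit vector is matched by Hamming distance. The key
# property is that near-duplicate images yield nearby hashes.

def average_hash(pixels):
    """pixels: flat list of grayscale values (here, a toy 8x8 = 64-cell grid)."""
    mean = sum(pixels) / len(pixels)
    return [1 if p > mean else 0 for p in pixels]

def hamming(h1, h2):
    return sum(a != b for a, b in zip(h1, h2))

# A toy "image" and a slightly brightened copy of it.
img = [(i * 37) % 256 for i in range(64)]
near_dup = [min(255, p + 5) for p in img]

h1, h2 = average_hash(img), average_hash(near_dup)
# Near-duplicates land within a small Hamming radius of each other,
# which is what a matching service tests against its hash set.
print(hamming(h1, h2) <= 10)
```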
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">In this work, we explore the technical feasibility of privacy-preserving perceptual hash matching for E2EE services. We begin by formalizing the problem space and identifying fundamental limitations for protocols. Next, we evaluate the predictive performance of common perceptual hash functions to understand privacy risks to E2EE users and contextualize errors associated with the protocols we design.</p>
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">Our primary contribution is a set of constructions for privacy-preserving perceptual hash matching. We design and evaluate client-side constructions for scenarios where disclosing the set of harmful hashes is acceptable. We then design and evaluate interactive protocols that optionally protect the hash set and do not disclose matches to users. The constructions that we propose are practical for deployment on mobile devices and introduce a limited additional risk of false negatives.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29242021-09-20T20:45:45-04:002021-09-20T20:45:45-04:00https://talks.cs.umd.edu/talks/2924The Spyware Used in Intimate Partner ViolenceNoel Warford<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 28, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Survivors of intimate partner violence increasingly report that abusers install spyware on devices to track their location, monitor communications, and cause emotional and physical harm. To date there has been only cursory investigation into the spyware used in such intimate partner surveillance (IPS). We provide the first in-depth study of the IPS spyware ecosystem. We design, implement, and evaluate a measurement pipeline that combines web and app store crawling with machine learning to find and label apps that are potentially dangerous in IPS contexts. Ultimately we identify several hundred such IPS-relevant apps.</p>
<p>While we find dozens of overt spyware tools, the majority are “dual-use” apps — they have a legitimate purpose (e.g., child safety or anti-theft), but are easily and effectively repurposed for spying on a partner. We document that a wealth of online resources are available to educate abusers about exploiting apps for IPS. We also show how some dual-use app developers are encouraging their use in IPS via advertisements, blogs, and customer support services. We analyze existing anti-virus and anti-spyware tools, which universally fail to identify dual-use apps as a threat.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29252021-09-21T08:34:00-04:002021-09-21T08:34:00-04:00https://talks.cs.umd.edu/talks/2925Muse: Secure Inference Resilient to Malicious ClientsAndreea Alexandru<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 5, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">The increasing adoption of machine learning inference in applications has led to a corresponding increase in concerns about the privacy guarantees offered by existing mechanisms for inference. Such concerns have motivated the construction of efficient <em>secure inference</em> protocols that allow parties to perform inference without revealing their sensitive information. Recently, there has been a proliferation of such proposals, rapidly improving efficiency. However, most of these protocols assume that the client is semi-honest, that is, the client does not deviate from the protocol; yet in practice, clients are many, have varying incentives, and can behave arbitrarily.</p>
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">To demonstrate that a malicious client can completely break the security of semi-honest protocols, we first develop a new <em>model-extraction attack</em> against many state-of-the-art secure inference protocols. Our attack enables a malicious client to learn model weights with 22x--312x fewer queries than the best black-box model-extraction attack and scales to much deeper networks.</p>
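<p>The intuition behind model extraction can be seen in a much-simplified setting: for a hypothetical <em>linear</em> model (not the deep networks or the attack from the paper), black-box query access alone reveals every parameter exactly.</p>

```python
# Toy model-extraction: with black-box query access to a linear model
# f(x) = w.x + b, an attacker recovers all parameters using d + 1 queries
# (the zero vector, then each standard basis vector). The paper's attack on
# deep networks under secure inference is far more involved; this only
# illustrates why unrestricted query access leaks model weights.

SECRET_W = [2.0, -3.5, 0.25]   # hidden model parameters
SECRET_B = 1.5

def query(x):                   # the only interface the attacker has
    return sum(w * xi for w, xi in zip(SECRET_W, x)) + SECRET_B

d = 3
b_hat = query([0.0] * d)                          # f(0) = b
w_hat = [query([1.0 if j == i else 0.0 for j in range(d)]) - b_hat
         for i in range(d)]                       # f(e_i) - b = w_i

print(w_hat, b_hat)  # recovers [2.0, -3.5, 0.25] and 1.5
```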
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">Motivated by the severity of our attack, we design and implement MUSE, an efficient two-party secure inference protocol resilient to <em>malicious clients</em>. MUSE introduces a novel cryptographic protocol for <em>conditional disclosure of secrets</em> to switch between authenticated additive secret shares and garbled circuit labels, and an improved <em>Beaver's triple generation</em> procedure which is 8x--12.5x faster than existing techniques.</p>
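<p>The Beaver's-triple technique mentioned above lets two parties multiply secret-shared values with only cheap openings in the online phase. A minimal two-party, semi-honest sketch over a prime field (toy parameters; MUSE additionally authenticates shares against malicious behavior) looks like this:</p>

```python
# Two-party multiplication of additive secret shares using a Beaver triple.
# Preprocessing hands out shares of random a, b and of c = a*b; online, the
# parties open d = x - a and e = y - b, then locally compute shares of x*y
# via z = c + d*b + e*a + d*e. This sketch is semi-honest and illustrative.
import random

P = 2**61 - 1  # a prime modulus

def share(v):
    r = random.randrange(P)
    return r, (v - r) % P

# Preprocessing: a shared random triple (a, b, c = a*b).
a, b = random.randrange(P), random.randrange(P)
a0, a1 = share(a); b0, b1 = share(b); c0, c1 = share(a * b % P)

# Online phase: multiply secret inputs x and y.
x, y = 1234, 5678
x0, x1 = share(x); y0, y1 = share(y)

d = (x0 - a0 + x1 - a1) % P   # parties jointly open d = x - a
e = (y0 - b0 + y1 - b1) % P   # and e = y - b

z0 = (c0 + d * b0 + e * a0) % P           # party 0's share of x*y
z1 = (c1 + d * b1 + e * a1 + d * e) % P   # party 1 adds the public d*e term

print((z0 + z1) % P == (x * y) % P)  # True: shares reconstruct to x*y
```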
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">These protocols allow MUSE to push a majority of its cryptographic overhead into a preprocessing phase: compared to the equivalent <em>semi-honest</em> protocol (which is close to state-of-the-art), MUSE's online phase is only 1.7x--2.2x slower and uses 1.4x more communication. Overall, MUSE is 13.4x--21x faster and uses 2x--3.6x less communication than existing secure inference protocols which defend against malicious clients.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29262021-09-21T08:36:31-04:002021-09-21T08:36:31-04:00https://talks.cs.umd.edu/talks/2926USENIX deadline chill seshN/A<br><br>Tuesday, October 12, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Relax and take a break from your paper that you're sending to USENIX!</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29272021-09-21T08:38:19-04:002021-09-21T08:38:19-04:00https://talks.cs.umd.edu/talks/2927Hopper: Modeling and Detecting Lateral MovementKaitlyn Devalk<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 19, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">In successful enterprise attacks, adversaries often need to gain access to additional machines beyond their initial point of compromise, a set of internal movements known as lateral movement. We present Hopper, a system for detecting lateral movement based on commonly available enterprise logs. 
Hopper constructs a graph of login activity among internal machines and then identifies suspicious sequences of logins that correspond to lateral movement. To understand the larger context of each login, Hopper employs an inference algorithm to identify the broader path(s) of movement that each login belongs to and the causal user responsible for performing the logins. Hopper then leverages this path inference algorithm, in conjunction with a set of detection rules and a new anomaly scoring algorithm, to surface the login paths most likely to reflect lateral movement. On a 15-month enterprise dataset consisting of over 780 million internal logins, Hopper achieves a 94.5% detection rate across over 300 realistic attack scenarios, including one red team attack, while generating an average of fewer than 9 alerts per day. In contrast, to detect the same number of attacks, prior state-of-the-art systems would need to generate nearly 8x as many false positives.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29282021-09-21T08:39:18-04:002021-09-21T08:39:18-04:00https://talks.cs.umd.edu/talks/2928Poisoning the Unlabeled Dataset of Semi-Supervised LearningYigitcan (John) Kaya<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 26, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">Semi-supervised machine learning models learn from a (small) set of labeled training examples, and a (large) set of unlabeled training examples. State-of-the-art models can reach within a few percentage points of fully-supervised training, while requiring 100x less labeled data.</p>
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">We study a new class of vulnerabilities: poisoning attacks that modify the unlabeled dataset. In order to be useful, unlabeled datasets are given strictly less review than labeled datasets, and adversaries can therefore poison them easily. By inserting maliciously-crafted unlabeled examples totaling just 0.1% of the dataset size, we can manipulate a model trained on this poisoned dataset to misclassify arbitrary examples at test time (as any desired label). Our attacks are highly effective across datasets and semi-supervised learning methods.</p>
<p style="margin: 0px 0px 0.7em; color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">We find that more accurate methods (thus more likely to be used) are significantly more vulnerable to poisoning attacks, and as such better training methods are unlikely to prevent this attack. To counter this we explore the space of defenses, and propose two methods that mitigate our attack.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29872021-11-09T09:49:29-05:002021-11-09T09:49:29-05:00https://talks.cs.umd.edu/talks/2987Compositional Security for Reentrant ApplicationsEthan Cecchetti<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 9, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>The disastrous vulnerabilities in smart contracts sharply remind us of our ignorance: we do not know how to write code that is secure in composition with malicious code. Information flow control has long been proposed as a way to achieve compositional security, offering strong guarantees even when combining software from different trust domains. Unfortunately, this appealing story breaks down in the presence of reentrancy attacks. We formalize a general definition of reentrancy and introduce a security condition that allows software modules like smart contracts to protect their key invariants while retaining the expressive power of safe forms of reentrancy. 
We present a security type system that provably enforces secure information flow; in conjunction with run-time mechanisms, it enforces secure reentrancy even in the presence of unknown code; and it helps locate and correct recent high-profile vulnerabilities.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29882021-11-09T09:51:14-05:002021-11-09T09:51:14-05:00https://talks.cs.umd.edu/talks/2988Weaponising Middleboxes for TCP Reflected AmplificationKevin Bock<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 30, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Reflective amplification attacks are a powerful tool in the arsenal of a DDoS attacker, but to date have almost exclusively targeted UDP-based protocols. In this paper, we demonstrate that non-trivial TCP-based amplification is possible and can be orders of magnitude more effective than well-known UDP-based amplification. By taking advantage of TCP-noncompliance in network middleboxes, we show that attackers can induce middleboxes to respond and amplify network traffic. With the novel application of a recent genetic algorithm, we discover and maximize the efficacy of new TCP-based reflective amplification attacks, and present several packet sequences that cause network middleboxes to respond with substantially more packets than we send.</p>
<p>We scanned the entire IPv4 Internet to measure how many IP addresses permit reflected amplification. We find hundreds of thousands of IP addresses that offer amplification factors greater than 100×. Through our Internet-wide measurements, we explore several open questions regarding DoS attacks, including the root cause of so-called “mega amplifiers”. We also report on network phenomena that cause some of the TCP-based attacks to be so effective as to technically have an infinite amplification factor (after the attacker sends a constant number of bytes, the reflector generates traffic indefinitely). We have made our code publicly available.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/29982021-11-16T07:28:53-05:002021-11-16T07:29:08-05:00https://talks.cs.umd.edu/talks/2998Hey Alexa, is this Skill Safe?: Taking a Closer Look at the Alexa Skill EcosystemWentao Guo<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 16, 2021, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Amazon's voice-based assistant, Alexa, enables users to directly interact with various web services through natural language dialogues. It provides developers with the option to create third-party applications (known as Skills) to run on top of Alexa. While such applications ease users' interaction with smart devices and bolster a number of additional services, they also raise security and privacy concerns due to the personal setting they operate in. This paper aims to perform a systematic analysis of the Alexa skill ecosystem. We perform the first large-scale analysis of Alexa skills, obtained from seven different skill stores totaling 90,194 unique skills.
Our analysis reveals several limitations that exist in the current skill vetting process. We show that not only can a malicious user publish a skill under any arbitrary developer/company name, but she can also make backend code changes after approval to coax users into revealing unwanted information. Next, we formalize the different skill-squatting techniques and evaluate the efficacy of such techniques. We find that while certain approaches are more favorable than others, there is no substantial abuse of skill squatting in the real world. Lastly, we study the prevalence of privacy policies across different categories of skills, and more importantly the policy content of skills that use the Alexa permission model to access sensitive user data. We find that around 23.3% of such skills do not fully disclose the data types associated with the permissions requested. We conclude by providing some suggestions for strengthening the overall ecosystem, and thereby enhancing transparency for end-users.</p>
This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30442022-01-24T15:38:51-05:002022-01-24T15:38:51-05:00https://talks.cs.umd.edu/talks/3044Squashing Bugs and Empowering Programmers with User-Centered Programming Language Design.(No abstract yet)<br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30492022-01-31T15:46:51-05:002022-01-31T15:46:51-05:00https://talks.cs.umd.edu/talks/3049SNARKBlock: Federated Anonymous Blocklisting from Hidden Common Input Aggregate ProofsMichael Rosenberg<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 1, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Zero-knowledge blocklists allow cross-platform blocking of users but, counter-intuitively, do not link users' identities inter- or intra-platform, or to the fact they were blocked. Unfortunately, existing approaches (Tsang et al. ’10) require that servers do work linear in the size of the blocklist for each verification of a non-membership proof. We design and implement SNARKBLOCK, a new protocol for zero-knowledge blocklisting with server-side verification that is logarithmic in the size of the blocklist. SNARKBLOCK is also the first approach to support ad-hoc, federated blocklisting: websites can mix and match their own blocklists from other blocklists and dynamically choose which identity providers they trust. Our core technical advance, of separate interest, is the HICIAP zero-knowledge proof system, which addresses a common problem in privacy-preserving protocols: using zero-knowledge proofs for repeated but unlinkable interactions.
Rerandomizing a Groth16 proof achieves unlinkability without the need to recompute the proof for every interaction. But this technique does not apply to applications where each interaction includes multiple Groth16 proofs over a common hidden input (e.g., the user’s identity). Here, the best known approach is to commit to the hidden input and feed it to each proof, but this creates a persistent identifier, forcing recomputation. HICIAP resolves this problem by aggregating n Groth16 proofs into one O(log n)-sized, O(log n)-verification-time proof which also shows that the input proofs share a hidden input. Because HICIAP is zero-knowledge, repeated shows of the same aggregate or an updated aggregate are unlinkable even though the underlying Groth16 proofs are never recomputed.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30602022-02-08T13:00:05-05:002022-02-08T13:00:05-05:00https://talks.cs.umd.edu/talks/3060Data Poisoning Attacks to Local Differential Privacy ProtocolsTom Hanson - UMD<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 8, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <div class="field-items">
<div class="field-item odd">
<p>Local Differential Privacy (LDP) protocols enable an untrusted data collector to perform privacy-preserving data analytics. In particular, each user locally perturbs its data to preserve privacy before sending it to the data collector, who aggregates the perturbed data to obtain statistics of interest. In the past several years, researchers from multiple communities—such as security, database, and theoretical computer science—have proposed many LDP protocols. These studies mainly focused on improving the utility of the LDP protocols. However, the security of LDP protocols is largely unexplored.</p>
<p>In this work, we aim to bridge this gap. We focus on LDP protocols for <em>frequency estimation</em> and <em>heavy hitter identification</em>, which are two basic data analytics tasks. Specifically, we show that an attacker can inject fake users into an LDP protocol and the fake users send carefully crafted data to the data collector such that the LDP protocol estimates high frequencies for arbitrary attacker-chosen items or identifies them as heavy hitters. We call our attacks <em>data poisoning attacks</em>. We theoretically and/or empirically show the effectiveness of our attacks. We also explore three countermeasures against our attacks. Our experimental results show that they can effectively defend against our attacks in some scenarios but have limited effectiveness in others, highlighting the need for new defenses against our attacks.</p>
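<p>For readers unfamiliar with LDP frequency estimation, a basic protocol in this family is generalized randomized response (k-RR); the sketch below (toy parameters, no poisoning or defenses) shows the perturb-then-debias pattern that the attacks above target by injecting fake reports.</p>

```python
# Generalized randomized response (k-RR) for LDP frequency estimation.
# Each user reports their true item with probability p and one of the other
# k-1 items uniformly otherwise; the collector de-biases observed counts to
# get unbiased frequency estimates. Parameters here are illustrative.
import math, random

random.seed(0)
k, eps, n = 4, 1.0, 20000
p = math.exp(eps) / (math.exp(eps) + k - 1)   # prob. of a truthful report
q = 1 / (math.exp(eps) + k - 1)               # prob. of each other item

def perturb(v):
    if random.random() < p:
        return v
    return random.choice([u for u in range(k) if u != v])

true_data = [0] * 10000 + [1] * 6000 + [2] * 3000 + [3] * 1000
reports = [perturb(v) for v in true_data]

# Unbiased estimate: P(report = v) = f_v*p + (1-f_v)*q, so invert linearly.
est = [((reports.count(v) / n) - q) / (p - q) for v in range(k)]
print([round(f, 2) for f in est])  # close to the true [0.5, 0.3, 0.15, 0.05]
```

A poisoning attacker in the paper's setting controls fake users who skip `perturb` entirely and always report a target item, inflating its de-biased estimate.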
</div>
</div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30642022-02-10T14:14:53-05:002022-02-10T14:14:53-05:00https://talks.cs.umd.edu/talks/3064Who's Calling? Characterizing Robocalls through Audio and Metadata AnalysisOmer Akgul<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 15, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Unsolicited calls are one of the most prominent security issues facing individuals today. Despite widespread anecdotal discussion of the problem, many important questions remain unanswered. In this paper, we present the first large-scale, longitudinal analysis of unsolicited calls to a honeypot of up to 66,606 lines over 11 months. From call metadata, we characterize the long-term trends of unsolicited calls, develop the first techniques to measure voicemail spam and wangiri attacks, and identify unexplained high-volume call incidents. Additionally, we mechanically answer a subset of the call attempts we receive to cluster related calls into operational campaigns, allowing us to characterize how these campaigns use telephone numbers. Critically, we find no evidence that answering unsolicited calls increases the amount of unsolicited calls received, overturning popular wisdom. We also find that we can reliably isolate individual call campaigns, in the process revealing the extent of two distinct Social Security scams while empirically demonstrating that the majority of campaigns rarely reuse phone numbers.
These analyses comprise powerful new tools and perspectives for researchers, investigators, and a beleaguered public.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30652022-02-10T14:16:15-05:002022-02-21T15:23:46-05:00https://talks.cs.umd.edu/talks/3065Reimagining Secret Sharing: Creating a Safer and More Versatile Primitive by Adding Authenticity, Correcting Errors, and Reducing Randomness RequirementsErica Blum<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 8, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>Aiming to strengthen classical secret-sharing to make it a more directly useful primitive for human end-users, we develop definitions, theorems, and efficient constructions for what we call adept secret-sharing. Our primary concerns are the properties we call privacy, authenticity, and error correction. Privacy strengthens the classical requirement by ensuring maximal confidentiality even if the dealer does not employ fresh, uniformly random coins with each sharing. That might happen either intentionally—to enable reproducible secret-sharing—or unintentionally, when an entropy source fails. Authenticity is a shareholder’s guarantee that a secret recovered using his or her share will coincide with the value the dealer committed to at the time the secret was shared. Error correction is the guarantee that recovery of a secret will succeed, also identifying the valid shares, exactly when there is a unique explanation as to which shares implicate what secret.
These concerns arise organically from a desire to create general-purpose libraries and apps for secret sharing that can withstand both strong adversaries and routine operational errors.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30662022-02-10T14:17:42-05:002022-02-10T14:17:42-05:00https://talks.cs.umd.edu/talks/3066Watching the Watchers: Practical Video Identification Attack in LTE NetworksPhoebe Moh<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 1, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">A video identification attack is a tangible privacy threat that can reveal videos that victims are watching. In this paper, we present the first study of a video identification attack in Long Term Evolution (LTE) networks. We discovered that, by leveraging broadcast radio signals, an unprivileged adversary equipped with a software-defined radio can 1) identify mobile users who are watching target videos of the adversary's interest and then 2) infer the video title that each of these users is watching. Using 46,810 LTE traces of three video streaming services from three cellular operators, we demonstrate that our attack achieves an accuracy of up to 0.985. We emphasize that this high level of accuracy stems from overcoming the unique challenges related to the operational logic of LTE networks and video streaming systems. 
Finally, we present an end-to-end attack scenario leveraging the presented video identification attack and propose countermeasures that are readily applicable to current LTE networks.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/30802022-02-21T15:24:51-05:002022-02-21T15:24:51-05:00https://talks.cs.umd.edu/talks/3080Automating the Discovery of Censorship Evasion Strategies(No abstract yet)<br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/31002022-03-10T11:28:06-05:002022-03-10T11:28:06-05:00https://talks.cs.umd.edu/talks/3100Defensive Technology Use by Political Activists During the Sudanese Revolution by Daffalla et al(No abstract yet)<br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/31632022-04-18T14:02:53-04:002022-04-18T14:02:53-04:00https://talks.cs.umd.edu/talks/3163Protecting Cryptography Against Compelled Self-Incrimination by Scheffler et al.Nathan Reitenger<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 19, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p>The information security community has devoted substantial effort to the design, development, and universal deployment of strong encryption schemes that withstand search and seizure by computationally-powerful nation-state adversaries. In response, governments are increasingly turning to a different tactic: issuing subpoenas that compel people to decrypt devices themselves, under the penalty of contempt of court if they do not comply. 
Compelled decryption subpoenas sidestep questions around government search powers that have dominated the Crypto Wars and instead touch upon a different (and still unsettled) area of the law: how encryption relates to a person’s right to silence and against self-incrimination. In this work, we provide a rigorous, composable definition of a critical piece of the law that determines whether cryptosystems are vulnerable to government compelled disclosure in the United States. We justify our definition by showing that it is consistent with prior court cases. We prove that decryption is often not compellable by the government under our definition. Conversely, we show that many techniques that bolster security overall can leave one more vulnerable to compelled disclosure. As a result, we initiate the study of protecting cryptographic protocols against the threat of future compelled disclosure. We find that secure multi-party computation is particularly vulnerable to this threat, and we design and implement new schemes that are provably resilient in the face of government compelled disclosure. 
We believe this work should influence the design of future cryptographic primitives and contribute toward the legal debates over the constitutionality of compelled decryption.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/31742022-04-25T12:14:07-04:002022-04-25T12:14:07-04:00https://talks.cs.umd.edu/talks/3174Are You Really Muted?: A Privacy Analysis of Mute Buttons in Video Conferencing AppsKassem Fawaz - University of Wisconsin<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 26, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: Merriweather, serif; font-size: 16px;">In the post-pandemic era, video conferencing apps (VCAs) have converted previously private spaces — bedrooms, living rooms, and kitchens — into semi-public extensions of the office. And for the most part, users have accepted these apps in their personal space, without much thought about the permission models that govern the use of their personal data during meetings. While access to a device’s video camera is carefully controlled, little has been done to ensure the same level of privacy for accessing the microphone. In this work, we ask the question: what happens to the microphone data when a user clicks the mute button in a VCA? We first conduct a user study to analyze users' understanding of the permission model of the mute button. Then, using runtime binary analysis tools, we trace raw audio in many popular VCAs as it traverses the app from the audio driver to the network. We find fragmented policies for dealing with microphone data among VCAs — some continuously monitor the microphone input during mute, and others do so periodically. 
One app transmits statistics of the audio to its telemetry servers while the app is muted. Using network traffic that we intercept en route to the telemetry server, we implement a proof-of-concept background activity classifier and demonstrate the feasibility of inferring the ongoing background activity during a meeting — cooking, cleaning, typing, etc. We achieved 81.9% macro accuracy on identifying six common background activities using intercepted outgoing telemetry packets when a user is muted.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/31802022-05-02T09:20:35-04:002022-05-02T09:20:35-04:00https://talks.cs.umd.edu/talks/3180An Analysis of the Role of Situated Learning in Starting a Security Culture in a Software Company by Tuladhar et alKelsey Fulton<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, May 3, 2022, 2:00-3:00 pm<br><br><b>Abstract:</b> <p><span style="color: #333333; font-family: 'Open Sans', Arial, Helvetica, sans-serif; font-size: 16px;">We conducted an ethnographic study of a software development company to explore if and how a development team adopts security practices into the development lifecycle. A PhD student in computer science with prior training in qualitative research methods was embedded in the company for eight months. The researcher joined the company as a software engineer and participated in all development activities as a new hire would, while also making observations on the development practices. During the fieldwork, we observed a positive shift in the development team's practices regarding secure development. 
Our analysis of data indicates that the shift can be attributed to enabling all software engineers to see how security knowledge could be applied to the specific software products they worked on. We also observed that by working with other developers to apply security knowledge under the concrete context where the software products were built, developers who possessed security expertise and wanted to push for more secure development practices (security advocates) could be effective in achieving this goal. Our data point to an interactive learning process where software engineers in a development team acquire knowledge, apply it in practice, and contribute to the team, leading to the creation of a set of preferred practices, or "culture" of the team. This learning process can be understood through the lens of the situated learning framework, where it is recognized that knowledge transfer happens within a community of practice, and applying the knowledge is the key in individuals (software engineers) acquiring it and the community (development team) embodying such knowledge in its practice. Our data show that enabling a situated learning environment for security gives rise to security-aware software engineers. 
We discuss the roles of management and security advocates in driving the learning process to start a security culture in a software company.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/32662022-09-12T14:19:28-04:002022-09-12T14:19:28-04:00https://talks.cs.umd.edu/talks/3266A Large-scale Temporal Measurement of Android Malicious Apps: Persistence, Migration, and Lessons LearnedWentao Guo<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5165 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 13, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div dir="ltr"><span style="font-size: small;">Wentao Guo will lead discussion on the paper “<a href="https://www.usenix.org/conference/usenixsecurity22/presentation/shen-yun" rel="noopener">A Large-scale Temporal Measurement of Android Malicious Apps: Persistence, Migration, and Lessons Learned</a>.” This week only, we'll be in <strong>IRB 5165</strong>, so hope to see you there at 12:30pm in IRB 5165 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/vFJqn8fobLhzudg27" rel="noopener">this form</a> before 10am Tuesday.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/32732022-09-19T11:08:49-04:002022-09-19T11:08:49-04:00https://talks.cs.umd.edu/talks/3273Online Website Fingerprinting: Evaluating Website Fingerprinting Attacks on Tor in the Real WorldSadia Nourin<br><br>Tuesday, September 20, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At reading group tomorrow, 9/20, Sadia Nourin will lead discussion on the paper “<a href="https://www.usenix.org/conference/usenixsecurity22/presentation/cherubin">Online Website Fingerprinting: Evaluating Website Fingerprinting Attacks on Tor in the Real World</a>.” Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want a falafel pita, please fill out <a href="https://forms.gle/vFJqn8fobLhzudg27" rel="noopener">this form</a> before 10am tomorrow.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/32822022-09-26T10:48:08-04:002022-09-26T10:48:08-04:00https://talks.cs.umd.edu/talks/3282Spoki: Unveiling a New Wave of Scanners through a Reactive Network TelescopeErik Rye<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 27, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div>At reading group tomorrow, 9/27, Erik Rye will lead discussion on the paper “<a id="gmail-docs-internal-guid-d8cbae7b-7fff-9bbd-7285-5eba41f5bdad" href="http://usenix.org/conference/usenixsecurity22/presentation/hiesgen">Spoki: Unveiling a New Wave of Scanners through a Reactive Network Telescope</a>.” Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want pizza, please fill out <a href="https://forms.gle/dMDLRQaTvS9qYV6a6" rel="noopener">this form</a> before 10am tomorrow.</div>
<div>We need someone for next week, so please sign up to lead discussions <a href="https://docs.google.com/document/d/1JeB79Pc2cADNWSx5LdAbPnvO5Gc8jIpRcz8zUkBYJNs/edit?usp=sharing" rel="noopener">here</a>! Reach out if you have questions.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/32962022-10-04T01:01:33-04:002022-10-04T01:01:33-04:00https://talks.cs.umd.edu/talks/3296Attacks on Deidentification's DefensesDavid Miller<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 4, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">David Miller will lead discussion on the paper “<a href="https://www.usenix.org/conference/usenixsecurity22/presentation/cohen" rel="noopener">Attacks on Deidentification's Defenses</a>.” Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want a sandwich, please fill out <a href="https://forms.gle/1xS2N74dwkxgEACo9" rel="noopener">this form</a> before 10am.</div>
<div>Please sign up to lead discussions <a href="https://docs.google.com/document/d/1JeB79Pc2cADNWSx5LdAbPnvO5Gc8jIpRcz8zUkBYJNs/edit?usp=sharing" rel="noopener">here</a>! Reach out if you have questions.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33132022-10-17T13:53:19-04:002022-10-17T13:53:19-04:00https://talks.cs.umd.edu/talks/3313Foundations of Coin Mixing ServicesNoemi Glaeser<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 18, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 10/18, Noemi Glaeser will lead discussion on her paper "<a id="m_-6082310690270910506gmail-docs-internal-guid-378c6470-7fff-f385-862c-170ff5f93c96" style="text-decoration: none;" href="https://eprint.iacr.org/2022/942" rel="noopener"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Foundations of Coin Mixing Services</span></a>.” Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want a naan wrap, please fill out <a href="https://forms.gle/a6cbaVcFzoymU4Qy6" rel="noopener">this form</a> before 10am.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33212022-10-24T08:53:14-04:002022-10-24T08:53:14-04:00https://talks.cs.umd.edu/talks/3321zkPoL: Zero-Knowledge Proof-of-LearningKasra Abbaszadeh<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 25, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At 
reading group tomorrow, 10/25, Kasra Abbaszadeh will present on his not-yet-published work "zkPoL: Zero-Knowledge Proof-of-Learning.” Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span></div>
<div>If you want pizza, please fill out <a href="https://forms.gle/uhvKUMwzc9vSrUV68" rel="noopener">this form</a> <strong>before 7pm tonight</strong>. Going forward, I'm going to ask for food RSVPs by Monday evening, not Tuesday morning, because some places don't open until late on Tuesdays.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33302022-10-31T09:36:55-04:002022-10-31T09:36:55-04:00https://talks.cs.umd.edu/talks/3330No Privacy Among Spies: Assessing the Functionality and Insecurity of Consumer Android Spyware AppsJulio Poveda<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 1, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At reading group tomorrow, 11/1, Julio Poveda will lead discussion on the paper "<span style="font-family: arial,sans-serif;"><a id="m_-6357910955734922538gmail-docs-internal-guid-48ed7c43-7fff-c5df-2909-fdc366f9a70e" style="text-decoration: none;" href="https://cseweb.ucsd.edu/~savage/papers/PETS23.pdf" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">No Privacy Among Spies: Assessing the Functionality and Insecurity of Consumer Android Spyware Apps</span></a></span></span>."<span style="font-size: small;"><span style="font-family: arial,sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want tacos, please fill out <a href="https://forms.gle/bgz4pMA3tmZ3KQDZA" rel="noopener">this form</a> <strong>before 7pm tonight</strong>.</div>
<div>Please sign up to lead discussions <a href="https://docs.google.com/document/d/1JeB79Pc2cADNWSx5LdAbPnvO5Gc8jIpRcz8zUkBYJNs/edit?usp=sharing" rel="noopener">here</a>! Reach out if you have questions.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33382022-11-07T13:28:10-05:002022-11-07T13:28:10-05:00https://talks.cs.umd.edu/talks/3338Erica Blum's proto-job talk on cryptography researchErica Blum<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 8, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At reading group tomorrow, 11/08, Erica Blum will give a proto-job talk on her research in cryptography.<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/71JYnrKyBugz96Fz7" rel="noopener">this form</a> before 7pm tonight (this is a late announcement, so I'll probably order a little extra).</div>
<p><br>Wentao</p>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33452022-11-14T11:04:04-05:002022-11-14T11:04:04-05:00https://talks.cs.umd.edu/talks/3345TSPU: Russia’s Decentralized Censorship SystemAaron Ortwein<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 15, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div>Hi everyone,</div>
<div>
<div><span style="font-size: small;">At reading group tomorrow, 11/15, Aaron Ortwein will lead discussion on the paper "<a id="gmail-docs-internal-guid-1e065dde-7fff-b1c3-52ed-25cd087c4683" style="text-decoration: none;" href="https://dl.acm.org/doi/abs/10.1145/3517745.3561461"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">TSPU: Russia’s Decentralized Censorship System</span></a>."<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want tacos, please fill out <a href="https://forms.gle/9wWkKfsDux9Thzyi9" rel="noopener">this form</a> <strong>before 8pm tonight</strong>.</div>
<div>Wentao</div>
</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33582022-11-28T10:45:51-05:002022-11-28T10:45:51-05:00https://talks.cs.umd.edu/talks/3358Opportunities and Challenges of using Cryptography for CPS SecurityAndreea Alexandru<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 29, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <div>
<div><span style="font-size: small;">At reading group tomorrow, 11/29, Andreea Alexandru will give a workshop talk on "Opportunities and Challenges of using Cryptography for CPS Security."<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/aHW2Y4NBa1SvLnWJ8" rel="noopener">this form</a> <strong>before 8pm tonight</strong>.</div>
</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/33642022-12-05T10:23:36-05:002022-12-05T10:23:36-05:00https://talks.cs.umd.edu/talks/3364When Frodo Flips: End-to-End Key Recovery on FrodoKEM via RowhammerHunter Kippen<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, December 6, 2022, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 12/6, Hunter Kippen will give a talk on "<a id="m_8533104867462790134gmail-docs-internal-guid-601a9f7f-7fff-e50f-36b4-98fb1578554b" style="text-decoration: none;" href="https://dl.acm.org/doi/10.1145/3548606.3560673" rel="noopener"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">When Frodo Flips: End-to-End Key Recovery on FrodoKEM via Rowhammer</span></a>."<span style="font-family: arial,sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want tacos, please fill out <a href="https://forms.gle/QYpK1Yc9GTTmaAJs5" rel="noopener">this form</a> <strong>before 8pm tonight</strong>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34652023-03-13T09:58:46-04:002023-03-13T09:58:46-04:00https://talks.cs.umd.edu/talks/3465Clarion: Anonymous Communication from Multiparty Shuffling ProtocolsSaba Eskandarian<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe 
Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 14, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At reading group tomorrow, 3/14, we have a guest from UNC Chapel Hill, Professor Saba Eskandarian. He will give a talk on "<a id="gmail-docs-internal-guid-78df162b-7fff-62f1-e320-149d6848f315" style="text-decoration: none;" href="https://www.ndss-symposium.org/wp-content/uploads/2022-141-paper.pdf"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Clarion: Anonymous Communication from Multiparty Shuffling Protocols</span></a>."<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want tacos, please fill out <a href="https://forms.gle/YgjaaLPTUJNDjGXP6" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34122023-02-13T09:19:19-05:002023-02-13T09:19:19-05:00https://talks.cs.umd.edu/talks/3412Using Undervolting as an On-Device Defense Against Adversarial Machine Learning AttacksDavid Miller<br><br>Tuesday, February 14, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 2/14, David Miller will give a talk on "<a id="gmail-docs-internal-guid-023fb6b8-7fff-bb5f-6723-b9d6879641b9" style="text-decoration: none;" href="https://arxiv.org/abs/2107.09804"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Using Undervolting as an On-Device Defense Against Adversarial Machine Learning Attacks</span></a>."<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/GUZpNEG66w8PZnL58" rel="noopener">this form</a> <strong>before 11am tomorrow</strong>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34242023-02-20T08:42:48-05:002023-02-20T08:42:48-05:00https://talks.cs.umd.edu/talks/3424How to Count Bots in Longitudinal Datasets of IP AddressesLeon Böck - TU Darmstadt<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 21, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 2/21, we have a guest from 
TU Darmstadt, Leon Böck. He will give a talk on his upcoming NDSS paper "<a id="m_2628231484278995952gmail-docs-internal-guid-ee3f7701-7fff-659c-afe5-4203eac3b900" style="text-decoration: none;" href="https://drive.google.com/file/d/1BIQUIf3_h8ku3JFoNjpctMbOZQ5VQz2z/view?usp=sharing" rel="noopener"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">How to Count Bots in Longitudinal Datasets of IP Addresses</span></a>."<span style="font-family: arial,sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want tacos, please fill out <a href="https://forms.gle/WyT9cCqBFmZXXoH36" rel="noopener">this form</a> <strong>before 11am tomorrow</strong>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34372023-02-27T08:49:31-05:002023-02-27T08:49:31-05:00https://talks.cs.umd.edu/talks/3437Asleep at the Keyboard? 
Assessing the Security of GitHub Copilot's Code ContributionsNoel Warford<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 28, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 2/28, Noel will give a talk on "<a id="gmail-docs-internal-guid-835ad919-7fff-5c02-4804-c7c8bde244a4" style="text-decoration: none;" href="https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=9833571"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Asleep at the Keyboard? Assessing the Security of GitHub Copilot's Code Contributions</span></a>."<span style="font-family: arial, sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/9YFcvXmsyHPNHP12A" rel="noopener">this form</a> <strong>before 11am tomorrow</strong>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34962023-04-10T13:39:42-04:002023-04-10T13:39:42-04:00https://talks.cs.umd.edu/talks/3496Assessing Anonymity Techniques Employed in German Court Decisions: A De-Anonymization ExperimentWentao Guo<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 11, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-family: arial,sans-serif; font-size: small;">At reading group tomorrow, 4/10, Wentao Guo will give a talk 
on "<a id="m_-2383652015949349113gmail-docs-internal-guid-82e2c9f3-7fff-230a-3424-b5a037e8965c" style="text-decoration: none;" href="https://www.usenix.org/conference/usenixsecurity23/presentation/deuber" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Assessing Anonymity Techniques Employed in German Court Decisions: A De-Anonymization Experiment</span></a>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want pizza, please fill out <a href="https://forms.gle/ncHyNCys7ZkkAbWi8" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34492023-03-06T11:55:32-05:002023-03-06T11:55:32-05:00https://talks.cs.umd.edu/talks/3449Security Foundations for Application-Based Covert Communication ChannelsErica Blum<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 7, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-size: small;">At reading group tomorrow, 3/7, Erica Blum will give a talk on "<a id="gmail-docs-internal-guid-16938f91-7fff-7afb-7098-92608118413b" style="text-decoration: none;" href="https://ieeexplore.ieee.org/document/9833752"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Security Foundations for Application-Based Covert Communication Channels</span></a>."<span style="font-family: arial, 
sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want a naan wrap, please fill out <a href="https://forms.gle/Cd4JEMZbowuukp5Z6" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/34852023-04-03T09:15:10-04:002023-04-03T09:15:10-04:00https://talks.cs.umd.edu/talks/3485Do Password Managers Nudge Secure (Random) Passwords?Phoebe Moh<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 4, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-size: small;">At reading group tomorrow, 4/4, Phoebe Moh will give a talk on "<a id="m_-4039508397640722455gmail-docs-internal-guid-d4aa3220-7fff-bd67-6a5d-a74d053da87c" style="text-decoration: none;" href="https://www.usenix.org/conference/soups2022/presentation/zibaei" rel="noopener"><span style="font-family: Arial; color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Do Password Managers Nudge Secure (Random) Passwords?</span></a>."<span style="font-family: arial,sans-serif;"> H</span>ope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>!</span> If you want pizza, please fill out <a href="https://forms.gle/m5bSzwhUazeoZ1k17" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/35092023-04-17T10:24:32-04:002023-04-17T10:24:32-04:00https://talks.cs.umd.edu/talks/3509Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child RelationshipsMiranda Wei<br><a 
href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 18, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <p><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 4/18, we have a guest joining virtually from the University of Washington, Miranda Wei. She will give a talk on her paper "<a id="gmail-docs-internal-guid-df7980a9-7fff-7d52-60ae-2d7fe57f7b9e" style="text-decoration: none;" href="https://www.usenix.org/conference/soups2022/presentation/wei"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Anti-Privacy and Anti-Security Advice on TikTok: Case Studies of Technology-Enabled Surveillance and Control in Intimate Partner and Parent-Child Relationships</span></a>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! 
If you want a salad bowl, please fill out <a href="https://forms.gle/wK7gv8HMpFcMEcjc6" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/35182023-04-24T10:59:56-04:002023-04-24T10:59:56-04:00https://talks.cs.umd.edu/talks/3518Measuring and Evading Turkmenistan’s Internet CensorshipSadia Nourin<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, April 25, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 4/25, Sadia Nourin will give a talk on her paper "</span><span style="font-family: arial, sans-serif; font-size: small;"><a id="gmail-docs-internal-guid-06f52e7a-7fff-5caa-9a96-d4358c68f9e7" style="text-decoration: none;" href="https://arxiv.org/pdf/2304.04835.pdf"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Measuring and Evading Turkmenistan’s Internet Censorship</span></a><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://forms.gle/CHNLaGt5zXFrY3En9" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/35282023-05-01T13:04:47-04:002023-05-01T13:04:47-04:00https://talks.cs.umd.edu/talks/3528zk-creds: Flexible Anonymous Credentials from zkSNARKs and Existing Identity InfrastructureMichael Rosenberg<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, May 2, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 5/2, Michael Rosenberg will give a talk on his paper "</span><span style="font-size: small;"><a id="gmail-docs-internal-guid-0e51424a-7fff-3bb2-ae78-d7d51c4651dd" style="text-decoration: none; font-family: arial, sans-serif;" href="https://eprint.iacr.org/2022/878"><span style="color: #1155cc; background-color: #f8f8f8; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">zk-creds: Flexible Anonymous Credentials from zkSNARKs and Existing Identity Infrastructure</span></a><span style="font-family: arial, sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://forms.gle/7B1P5ezVH13prVe89" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36082023-09-18T09:58:47-04:002023-09-18T09:58:47-04:00https://talks.cs.umd.edu/talks/3608Adventures in Recovery Land: Testing the Account Recovery of Popular Websites When the Second Factor is LostPhoebe Moh<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 19, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial,sans-serif; font-size: small;">At reading group tomorrow, 9/19, Phoebe Moh will give a talk on the </span><span style="font-family: arial,sans-serif; font-size: small;">paper "</span><span style="font-size: small;"><a id="m_-6774366661351911050gmail-docs-internal-guid-f81b2ebf-7fff-9921-2863-9371f940bd6a" style="text-decoration: none; font-family: arial,sans-serif;" href="https://www.usenix.org/conference/soups2023/presentation/gerlitz-adventures" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Adventures in Recovery Land: Testing the Account Recovery of Popular Websites When the Second Factor is Lost</span></a><span style="font-family: arial,sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSd4EuAI6-sS8PD7wnIYFkgXqlvlDKB2WhZW2ZDBPXg8X1YQbg/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36152023-09-25T10:32:05-04:002023-09-25T10:32:05-04:00https://talks.cs.umd.edu/talks/3615Destination Unreachable: Characterizing Internet Outages and ShutdownsSadia Nourin<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, September 26, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 9/26, Sadia Nourin will give a talk on the </span><span style="font-family: arial, sans-serif; font-size: small;">paper </span><span style="font-family: arial, sans-serif; font-size: small;">"</span><span style="font-size: small;"><a id="gmail-docs-internal-guid-a2acac4a-7fff-c470-ca59-75c4d027e480" style="text-decoration: none; font-family: arial, sans-serif;" href="https://cseweb.ucsd.edu/~snoeren/papers/shutdown-sigcomm23.pdf"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Destination Unreachable: Characterizing Internet Outages and Shutdowns</span></a><span style="font-family: arial, sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSdmREkCloFzg6hc8QULkTZLQN_bANaq7-TOlvWaVTfOV_x_8w/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 9:30am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36222023-10-02T09:30:48-04:002023-10-02T09:30:48-04:00https://talks.cs.umd.edu/talks/3622An Audit of Facebook's Political Ad Policy EnforcementNoemi Glaeser<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 3, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 10/3, </span>Noemi Glaeser<span style="font-family: arial, sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial, sans-serif; font-size: small;">paper </span><span style="font-family: arial, sans-serif; font-size: small;">"</span><span style="font-size: small;"><a id="m_4972264836015870724gmail-docs-internal-guid-a2acac4a-7fff-c470-ca59-75c4d027e480" style="text-decoration: none; font-family: arial, sans-serif;" href="https://www.usenix.org/conference/usenixsecurity22/presentation/lepochat" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">An Audit of Facebook's Political Ad Policy Enforcement</span></a><span style="font-family: arial, sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSfNBKppVHHOVd1cHfSf_pGmFs9yxAN_J7OxS8VmYZ_mVVJk6g/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 9:30am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36322023-10-09T09:41:56-04:002023-10-10T09:18:06-04:00https://talks.cs.umd.edu/talks/3632Lessons Learned: Surveying the Practicality of Differential Privacy in the IndustryWentao Guo<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 10, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 10/10, </span>Wentao Guo<span style="font-family: arial, sans-serif; font-size: small;"> will give a talk on the paper "</span><span style="font-family: arial, sans-serif; font-size: small;"><a id="gmail-docs-internal-guid-d19785b7-7fff-12ba-0d2a-5921312f136a" style="text-decoration: none;" href="https://petsymposium.org/popets/2023/popets-2023-0045.php"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Lessons Learned: Surveying the Practicality of Differential Privacy in the Industry</span></a></span><span style="font-size: small;"><span style="font-family: arial, sans-serif;">." 
Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSe5n78qdkYYN9mMPtffqT4z_BeiTpN9w8HFystOI2d_O6cPlQ/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></div><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36562023-10-23T09:48:55-04:002023-10-23T09:48:55-04:00https://talks.cs.umd.edu/talks/3656Using AI for classification of software vulnerabilities and malware detectionKaterina Goseva-Popstojanova<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, October 24, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At <span class="gmail-il">reading</span> <span class="gmail-il">group</span> tomorrow, 10/24, we have an invited speaker, </span>Katerina Goseva-Popstojanova from <span style="font-family: arial, sans-serif; font-size: small;">West Virginia University, who will give a talk entitled</span><span style="font-family: arial, sans-serif; font-size: small;"> "</span>Using AI for classification of software vulnerabilities and malware detection<span style="font-size: small;"><span style="font-family: arial, sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSemyayg3zjJYJM3eJg-CujBg955IwNSwplYVgCT-i-0CABO9Q/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p>
<p>Abstract: Today’s society heavily relies on the dependable operation of software and systems. Dr. Goseva-Popstojanova’s research focuses on the development of experimental, analytical, and AI techniques for the quantitative assessment and assurance of reliable and secure software and systems. This talk centers on two recent research efforts: (1) characterization and automatic classification of software vulnerabilities and (2) malware detection using multimodal machine learning.</p><br><b>Bio:</b> <p>Dr. Katerina Goseva-Popstojanova is a Professor in the Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, USA. Her research interests are in the areas of software engineering, cybersecurity, applied data analytics, and higher education in these areas. She received the National Science Foundation (NSF) CAREER award in 2005 and has served as a Principal Investigator on various NSF, NASA, Department of Defense (DoD), and industry-funded projects. She serves as the Academic Coordinator of the M.S. in Software Engineering program and leads the B.S. in Cybersecurity program at West Virginia University. 
She serves as an Associate Editor of the IEEE Transactions on Reliability and was a Program co-Chair of ISSRE 2007 and QRS 2021, and a General co-Chair of ISSRE 2022.</p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36712023-11-06T08:30:47-05:002023-11-06T08:30:47-05:00https://talks.cs.umd.edu/talks/3671Works in progress: cryptography and de-identificationKhalil Guy and Wentao Guo<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 7, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 11/07, we're trying out a new format with two work-in-progress talks</span><span style="font-family: arial, sans-serif; font-size: small;">: Khalil will give a talk on his class project for Cryptography and Hostile Governments, and I will give a talk on my project interviewing people who de-identify data</span><span style="font-size: small;"><span style="font-family: arial, sans-serif;"><span>. The goal is to get plenty of constructive feedback.</span></span></span></div>
<div><span style="font-size: small;"><span style="font-family: arial, sans-serif;"><span> </span></span></span></div>
<div><span style="font-size: small;"><span style="font-family: arial, sans-serif;"><span>Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSfuR8v378v-SpqFQ6gpxZfFz4Bsgq7mR3cQaO1gp_QWZ_-z8A/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/36942023-11-13T08:43:01-05:002023-11-13T08:43:01-05:00https://talks.cs.umd.edu/talks/3694Glaze: Protecting Artists from Style Mimicry by Text-to-Image ModelsNoel Warford<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 14, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial,sans-serif; font-size: small;">At reading group tomorrow, 11/14, </span>Noel Warford<span style="font-family: arial,sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial,sans-serif; font-size: small;">pap</span><span style="font-family: arial,sans-serif; font-size: small;">er "</span><span style="font-size: small;"><a id="m_-5907641820514934774m_4405964568890655170gmail-docs-internal-guid-ac79d9b8-7fff-e838-5ec3-d8f1bd8a6c51" style="text-decoration: none; font-family: arial,sans-serif;" href="https://www.usenix.org/conference/usenixsecurity23/presentation/shan" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Glaze: Protecting Artists from Style Mimicry by Text-to-Image Models</span></a><span style="font-family: arial,sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSefGRqWoEmaRlV4BNmf9rlXI0CD72ZNkpHIRjIgroavmuIxBw/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/37002023-11-27T10:50:01-05:002023-11-27T10:50:01-05:00https://talks.cs.umd.edu/talks/3700Data Poisoning Won't Save You From Facial RecognitionTasos Toumazatos<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, November 28, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 11/28, </span>Tasos Toumazatos<span style="font-family: arial, sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial, sans-serif; font-size: small;">pap</span><span style="font-family: arial, sans-serif; font-size: small;">er "</span><span style="font-size: small;"><a id="gmail-docs-internal-guid-9355ed6d-7fff-0b38-dc13-00d7a9d02f74" style="text-decoration: none; font-family: arial, sans-serif;" href="https://arxiv.org/pdf/2106.14851.pdf"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Data Poisoning Won't Save You From Facial Recognition</span></a><span style="font-family: arial, sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSd9alP-kZaQg4ZLxzDnjUc5GC_3uQz2Ahg9ELIvxqGKwUlh7Q/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/37042023-12-04T11:01:22-05:002023-12-04T11:01:22-05:00https://talks.cs.umd.edu/talks/3704Analysis of Google Ads Settings Over Time: Updated, Individualized, Accurate, and FilteredNathan Reitinger<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, December 5, 2023, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial,sans-serif; font-size: small;">At reading group tomorrow, 12/05, </span>Nathan Reitinger<span style="font-family: arial,sans-serif; font-size: small;"> will give a talk on his own </span><span style="font-family: arial,sans-serif; font-size: small;">pap</span><span style="font-family: arial,sans-serif; font-size: small;">er "</span><span style="font-size: small;"><a id="m_-6140692731366269547m_-6715815258538184228gmail-docs-internal-guid-9355ed6d-7fff-0b38-dc13-00d7a9d02f74" style="text-decoration: none; font-family: arial,sans-serif;" href="https://www.blaseur.com/papers/wpes23-google.pdf" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Analysis of Google Ads Settings Over Time: Updated, Individualized, Accurate, and Filtered</span></a><span style="font-family: arial,sans-serif;"><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSdE9ZgZ9IdOwZEeC-90Ciy_iVIvxwCVfyZDteGIesHCqYcmgA/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 10am tomorrow</strong>.</span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/37572024-02-12T09:57:20-05:002024-02-12T09:57:20-05:00https://talks.cs.umd.edu/talks/3757Decoding the Secrets of Machine Learning in Windows Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model PerformanceSungsu Kwag<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 13, 2024, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 2/13, Sungsu Kwag</span><span style="font-family: arial, sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial, sans-serif; font-size: small;">pap</span><span style="font-family: arial, sans-serif; font-size: small;">er "</span><span style="font-size: small;"><a id="gmail-docs-internal-guid-158e0834-7fff-198e-8369-564c6054e9ac" style="text-decoration: none; font-family: arial, sans-serif;" href="https://dl.acm.org/doi/pdf/10.1145/3576915.3616589"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Decoding the Secrets of Machine Learning in Windows Malware Classification: A Deep Dive into Datasets, Feature Extraction, and Model Performance</span></a><span style="font-family: arial, sans-serif;"><span><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! 
If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLSe15euEMZVgRFdj1jbvTOtXhkfLF4mFHfxenFy5lm7H1gDvpA/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 9:30am tomorrow</strong>.</span></span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/37952024-02-26T10:04:48-05:002024-02-26T10:04:48-05:00https://talks.cs.umd.edu/talks/3795Malla: Demystifying Real-world Large Language Model Integrated Malicious ServicesJulio Poveda<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, February 27, 2024, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial, sans-serif; font-size: small;">At reading group tomorrow, 2/27, Julio Poveda</span><span style="font-family: arial, sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial, sans-serif; font-size: small;">pap</span><span style="font-family: arial, sans-serif; font-size: small;">er "</span><span style="font-size: small;"><a id="m_-5377026238796430689gmail-docs-internal-guid-158e0834-7fff-198e-8369-564c6054e9ac" style="text-decoration: none; font-family: arial, sans-serif;" href="https://arxiv.org/pdf/2401.03315.pdf" rel="noopener"><span style="color: #1155cc; background-color: transparent; font-weight: 400; font-style: normal; font-variant: normal; text-decoration: underline; vertical-align: baseline; white-space: pre-wrap;">Malla: Demystifying Real-world Large Language Model Integrated Malicious Services</span></a><span style="font-family: arial, sans-serif;"><span><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLScrAdfNoXEC6O_pq2WZYR0leI3f8j5Uojjjf7o1j9q1_iVRrA/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 9:30am tomorrow</strong>.</span></span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/38122024-03-04T09:38:14-05:002024-03-04T09:38:14-05:00https://talks.cs.umd.edu/talks/3812Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language ModelsKamala Varma<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 5, 2024, 12:30-1:30 pm<br><br><b>Abstract:</b> <div>At reading group tomorrow, 3/5, Kamala Varma will give a practice talk on her paper "Understanding, Uncovering, and Mitigating the Causes of Inference Slowdown for Language Models." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLScPrTPH5i08WPu-HGqylSdMQcPN3XcHEuCyITXBIbdzlz17tA/viewform?usp=sf_link" rel="noopener">this form</a> before 9:30am tomorrow.</div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>tag:talks.cs.umd.edu,2005:Talk/38262024-03-25T09:29:43-04:002024-03-25T09:29:43-04:00https://talks.cs.umd.edu/talks/3826No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and HarassmentEmma Shroyer<br><a href="https://maps.umd.edu/map/index.html?Nav=Hide&MapView=Detailed&NoWelcome=True&LocationType=Building&LocationName=432">5105 Brendan Iribe Center for Computer Science and Engineering (IRB)</a><br>Tuesday, March 26, 2024, 12:30-1:30 pm<br><br><b>Abstract:</b> <div><span style="font-family: arial,sans-serif; font-size: small;">At reading group tomorrow, 3/26, Emma Shroyer</span><span style="font-family: arial,sans-serif; font-size: small;"> will give a talk on the </span><span style="font-family: arial,sans-serif; font-size: small;">pap</span><span style="font-family: arial,sans-serif; font-size: small;">er "</span><a href="https://arxiv.org/pdf/2304.07037.pdf" rel="noopener">No Easy Way Out: the Effectiveness of Deplatforming an Extremist Forum to Suppress Hate and Harassment</a><span style="font-size: small;"><span style="font-family: arial,sans-serif;"><span><span>." Hope to see you there at 12:30pm in IRB 5105 or on <a href="https://umd.zoom.us/j/95986206096" rel="noopener">Zoom</a>! If you want lunch, please fill out <a href="https://docs.google.com/forms/d/e/1FAIpQLScHAbgmhOepw0ox8hCELpK-xJ9u-V0QO9WmuQgQ8lMAvO2TfQ/viewform?usp=sf_link" rel="noopener">this form</a> <strong>before 9:30am tomorrow</strong>.</span></span></span></span></div>
<p> </p><br>This talk is part of the following lists: <a href="https://talks.cs.umd.edu/lists/19">Security Reading Group</a><br>