%%% -*-BibTeX-*-
%%% ====================================================================
%%%  BibTeX-file{
%%%     author          = "Nelson H. F. Beebe",
%%%     version         = "1.15",
%%%     date            = "13 January 2015",
%%%     time            = "18:03:45 MDT",
%%%     filename        = "toct.bib",
%%%     address         = "University of Utah
%%%                        Department of Mathematics, 110 LCB
%%%                        155 S 1400 E RM 233
%%%                        Salt Lake City, UT 84112-0090
%%%                        USA",
%%%     telephone       = "+1 801 581 5254",
%%%     FAX             = "+1 801 581 4148",
%%%     URL             = "http://www.math.utah.edu/~beebe",
%%%     checksum        = "32755 2918 16594 150009",
%%%     email           = "beebe at math.utah.edu, beebe at acm.org,
%%%                        beebe at computer.org (Internet)",
%%%     codetable       = "ISO/ASCII",
%%%     keywords        = "bibliography, BibTeX, ACM Transactions on
%%%                        Computation Theory",
%%%     supported       = "no",
%%%     docstring       = "This is a COMPLETE BibTeX bibliography for
%%%                        the journal ACM Transactions on Computation
%%%                        Theory (CODEN ????, ISSN 1942-3454),
%%%                        covering all journal issues from 2009 --
%%%                        date.
%%%
%%%                        The journal has a World-Wide Web site at:
%%%
%%%                            http://www.acm.org/pubs/toct
%%%
%%%                        Tables-of-contents are available at:
%%%
%%%                            http://www.acm.org/pubs/contents/journals/toct/
%%%                            http://portal.acm.org/browse_dl.cfm?idx=J1190
%%%
%%%                        Qualified subscribers can retrieve the full
%%%                        text of recent articles in PDF form.
%%%
%%%                        At version 1.15, the COMPLETE journal
%%%                        coverage looked like this:
%%%
%%%                             2009 (   7)    2011 (   6)    2013 (  18)
%%%                             2010 (   5)    2012 (  15)    2014 (  16)
%%%
%%%                             Article:         67
%%%
%%%                             Total entries:   67
%%%
%%%                        The initial draft was extracted from the
%%%                        ACM Web site, with manual corrections and
%%%                        additions from bibliographies in the TeX
%%%                        User Group collection, the author's
%%%                        personal bibliography files, the Compendex
%%%                        database, and a very large computer science
%%%                        bibliography collection on ftp.ira.uka.de
%%%                        in /pub/bibliography to which many people
%%%                        have contributed.  Where multiple
%%%                        sources of a particular entry existed,
%%%                        field values have been manually merged to
%%%                        preserve maximal information.  Missing
%%%                        entries were identified by software
%%%                        developed for the TeX User Group and BibNet
%%%                        bibliography archive projects, and were
%%%                        then supplied from the original journal
%%%                        issues.  Questions arising from conflicting
%%%                        data were resolved by consulting the
%%%                        original journal issues.
%%%
%%%                        ACM copyrights explicitly permit abstracting
%%%                        with credit, so article abstracts, keywords,
%%%                        and subject classifications have been
%%%                        included in this bibliography wherever
%%%                        available.  Article reviews have been
%%%                        omitted, until their copyright status has
%%%                        been clarified.
%%%
%%%                        The bibsource keys in the bibliography
%%%                        entries below indicate the data sources,
%%%                        usually the Karlsruhe computer science
%%%                        bibliography archive for the first two
%%%                        volumes, or the journal Web site or the
%%%                        Compendex database, both of which lack
%%%                        coverage of this journal before 1985.
%%%
%%%                        URL keys in the bibliography point to
%%%                        World Wide Web locations of additional
%%%                        information about the entry.
%%%
%%%                        Spelling has been verified with the UNIX
%%%                        spell and GNU ispell programs using the
%%%                        exception dictionary stored in the
%%%                        companion file with extension .sok.
%%%
%%%                        BibTeX citation tags are uniformly chosen
%%%                        as name:year:abbrev, where name is the
%%%                        family name of the first author or editor,
%%%                        year is a 4-digit number, and abbrev is a
%%%                        3-letter condensation of important title
%%%                        words. Citation tags were automatically
%%%                        generated by software developed for the
%%%                        BibNet Project.
%%%
%%%                        In this bibliography, entries are sorted in
%%%                        publication order, using ``bibsort -byvolume''.
%%%
%%%                        The checksum field above contains a CRC-16
%%%                        checksum as the first value, followed by the
%%%                        equivalent of the standard UNIX wc (word
%%%                        count) utility output of lines, words, and
%%%                        characters.  This is produced by Robert
%%%                        Solovay's checksum utility.",
%%%  }
%%% ====================================================================
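The docstring above describes the name:year:abbrev citation-tag scheme used
throughout this file. The following short Python sketch illustrates the idea;
it is a hypothetical re-implementation for clarity (the actual BibNet Project
tag-generation software differs, and the stopword list here is an assumption),
placed between entries where BibTeX ignores it:

```python
import re

# Hypothetical stopword list; the real BibNet tools use their own rules.
STOPWORDS = {"a", "an", "and", "for", "in", "of", "on", "the", "to", "with"}

def citation_tag(family_name, year, title):
    """Build a BibTeX tag of the form name:year:abbrev, where abbrev is a
    3-letter condensation of important title words."""
    # Keep alphabetic title words, dropping common stopwords.
    words = [w for w in re.findall(r"[A-Za-z]+", title)
             if w.lower() not in STOPWORDS]
    # First letter of up to three significant words, uppercased.
    abbrev = "".join(w[0].upper() for w in words[:3])
    return "%s:%d:%s" % (family_name.replace(" ", "-"), year, abbrev)

print(citation_tag("Aaronson", 2009,
                   "Algebrization: A New Barrier in Complexity Theory"))
# -> Aaronson:2009:ANB
```

This reproduces, for example, the tag of the Aaronson--Wigderson entry below.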

@Preamble{"\input bibnames.sty"}

%%% ====================================================================
%%% Acknowledgement abbreviations:

@String{ack-nhfb = "Nelson H. F. Beebe,
University of Utah,
Department of Mathematics, 110 LCB,
155 S 1400 E RM 233,
Salt Lake City, UT 84112-0090, USA,
Tel: +1 801 581 5254,
FAX: +1 801 581 4148,
e-mail: \path|beebe@math.utah.edu|,
\path|beebe@acm.org|,
\path|beebe@computer.org| (Internet),
URL: \path|http://www.math.utah.edu/~beebe/|"}

%%% ====================================================================
%%% Journal abbreviations:

@String{j-TOCT                  = "ACM Transactions on Computation Theory"}

%%% ====================================================================
%%% Publisher abbreviations:

@String{pub-ACM                 = "ACM Press"}

@String{pub-ACM:adr             = "New York, NY 10036, USA"}

%%% ====================================================================
%%% Bibliography entries:

@Article{Fortnow:2009:EF,
author =       "Lance Fortnow",
title =        "{Editor}'s Foreword",
journal =      j-TOCT,
volume =       "1",
number =       "1",
pages =        "1:1--1:??",
month =        feb,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1490270.1490271",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Fri Apr 24 19:03:42 MDT 2009",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Aaronson:2009:ANB,
author =       "Scott Aaronson and Avi Wigderson",
title =        "Algebrization: {A} New Barrier in Complexity Theory",
journal =      j-TOCT,
volume =       "1",
number =       "1",
pages =        "2:1--2:??",
month =        feb,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1490270.1490272",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Fri Apr 24 19:03:42 MDT 2009",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Any proof of P $\neq$ NP will have to overcome two
barriers: {\em relativization\/} and {\em natural
proofs}. Yet over the last decade, we have seen circuit
lower bounds (e.g., that PP does not have linear-size
circuits) that overcome both barriers simultaneously.
So the question arises of whether there is a third
barrier to progress on the central questions in
complexity theory.\par

In this article, we present such a barrier, which we
call {\em algebraic relativization\/} or {\em
algebrization}. The idea is that, when we relativize
some complexity class inclusion, we should give the
simulating machine access not only to an oracle $A$,
but also to a low-degree extension of $A$ over a finite
field or ring.\par

We systematically go through basic results and open
problems in complexity theory to delineate the power of
the new algebrization barrier. First, we show that all
known nonrelativizing results based on
arithmetization---both inclusions such as IP = PSPACE
and MIP = NEXP, and separations such as MA$_{\rm EXP} \not\subset$
P/poly---do indeed algebrize. Second, we show that
almost all of the major open problems---including P
versus NP, P versus RP, and NEXP versus P/poly---will
require {\em non-algebrizing techniques}. In some
cases, algebrization seems to explain exactly why
progress stopped where it did: for example, why we have
superlinear circuit lower bounds for PromiseMA but not
for NP.\par

Our second set of results follows from lower bounds in
a new model of {\em algebraic query complexity}, which
we argue is interesting in its own right. Some of our lower bounds use direct
combinatorial and algebraic arguments, while others
stem from a surprising connection between our model and
communication complexity. Using this connection, we are
also able to give an MA-protocol for the Inner Product
function with $O(\sqrt{n} \log n)$ communication
(essentially matching a lower bound of Klauck), as well
as a communication complexity conjecture whose truth
would imply NL $\neq$ NP.",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
keywords =     "arithmetization; communication complexity; interactive
proofs; Low-degree polynomials; oracles; query
complexity",
}

@Article{Razborov:2009:SPB,
author =       "Alexander Razborov",
title =        "A Simple Proof of {Bazzi's Theorem}",
journal =      j-TOCT,
volume =       "1",
number =       "1",
pages =        "3:1--3:??",
month =        feb,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1490270.1490273",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Fri Apr 24 19:03:42 MDT 2009",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Linial and Nisan [1990] asked if any polylog-wise
independent distribution fools any function in AC$^0$.
In a recent remarkable development, Bazzi solved this
problem for the case of DNF formulas. The aim of this
note is to present a simplified version of his proof.",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
keywords =     "DNF; Pseudo-random generators",
}

@Article{Bourke:2009:DPR,
author =       "Chris Bourke and Raghunath Tewari and N. V.
Vinodchandran",
title =        "Directed Planar Reachability Is in Unambiguous
Log-Space",
journal =      j-TOCT,
volume =       "1",
number =       "1",
pages =        "4:1--4:??",
month =        feb,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1490270.1490274",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Fri Apr 24 19:03:42 MDT 2009",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We make progress in understanding the complexity of
the graph reachability problem in the context of
unambiguous logarithmic space computation; a restricted
form of nondeterminism. As our main result, we show a
new upper bound on the {\em directed planar
reachability problem\/} by showing that it can be
decided in the class unambiguous logarithmic space
(UL). We explore the possibility of showing the same
upper bound for the general graph reachability problem.
We give a simple reduction showing that the
reachability problem for directed graphs with thickness
two is complete for the class nondeterministic
logarithmic space (NL). Hence an extension of our
results to directed graphs with thickness two will
unconditionally collapse NL to UL.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
keywords =     "Directed graph reachability; planar graphs;
unambiguous log-space",
}

@Article{David:2009:ISB,
author =       "Matei David and Toniann Pitassi and Emanuele Viola",
title =        "Improved Separations between Nondeterministic and
Randomized Multiparty Communication",
journal =      j-TOCT,
volume =       "1",
number =       "2",
pages =        "5:1--5:??",
month =        sep,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1595391.1595392",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:03 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Guruswami:2009:HSS,
author =       "Venkatesan Guruswami and Prasad Raghavendra",
title =        "Hardness of Solving Sparse Overdetermined Linear
Systems: {A} 3-Query {PCP} over Integers",
journal =      j-TOCT,
volume =       "1",
number =       "2",
pages =        "6:1--6:??",
month =        sep,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1595391.1595393",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:03 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "6",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Ben-Sasson:2009:SQP,
author =       "Eli Ben-Sasson and Prahladh Harsha and Oded Lachish
and Arie Matsliah",
title =        "Sound 3-Query {PCPPs} Are Long",
journal =      j-TOCT,
volume =       "1",
number =       "2",
pages =        "7:1--7:??",
month =        sep,
year =         "2009",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1595391.1595394",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:03 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "7",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Kyncl:2010:LRD,
author =       "Jan Kyn{\v{c}}l and Tom{\'a}{\v{s}} Vysko{\v{c}}il",
title =        "Logspace Reduction of Directed Reachability for
Bounded Genus Graphs to the Planar Case",
journal =      j-TOCT,
volume =       "1",
number =       "3",
pages =        "8:1--8:??",
month =        mar,
year =         "2010",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1714450.1714451",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:04 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "8",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Beame:2010:FCD,
author =       "Paul Beame and Russell Impagliazzo and Toniann Pitassi
and Nathan Segerlind",
title =        "Formula Caching in {DPLL}",
journal =      j-TOCT,
volume =       "1",
number =       "3",
pages =        "9:1--9:??",
month =        mar,
year =         "2010",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1714450.1714452",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:04 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "9",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Datta:2010:PDP,
author =       "Samir Datta and Raghav Kulkarni and Nutan Limaye and
Meena Mahajan",
title =        "Planarity, Determinants, Permanents, and (Unique)
Matchings",
journal =      j-TOCT,
volume =       "1",
number =       "3",
pages =        "10:1--10:??",
month =        mar,
year =         "2010",
CODEN =        "????",
DOI =          "http://doi.acm.org/10.1145/1714450.1714453",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Mar 16 10:08:04 MDT 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "10",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Yin:2010:CPP,
author =       "Yitong Yin",
title =        "Cell-Probe Proofs",
journal =      j-TOCT,
volume =       "2",
number =       "1",
pages =        "1:1--1:??",
month =        nov,
year =         "2010",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/1867719.1867720",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Nov 23 11:20:48 MST 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Hoefer:2010:TAC,
author =       "Martin Hoefer and Alexander Souza",
title =        "Tradeoffs and Average-Case Equilibria in Selfish
Routing",
journal =      j-TOCT,
volume =       "2",
number =       "1",
pages =        "2:1--2:??",
month =        nov,
year =         "2010",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/1867719.1867721",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Tue Nov 23 11:20:48 MST 2010",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Purdy:2011:LBC,
author =       "Eric Purdy",
title =        "Lower Bounds for Coin-Weighing Problems",
journal =      j-TOCT,
volume =       "2",
number =       "2",
pages =        "3:1--3:??",
month =        mar,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/1944857.1944858",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Mar 28 09:50:20 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Arvind:2011:SGI,
author =       "Vikraman Arvind and Jacobo Tor{\'a}n",
title =        "Solvable Group Isomorphism Is (Almost) in {NP} $\cap$
{coNP}",
journal =      j-TOCT,
volume =       "2",
number =       "2",
pages =        "4:1--4:??",
month =        mar,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/1944857.1944859",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Mar 28 09:50:20 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Fellows:2011:CDF,
author =       "Michael R. Fellows and Jiong Guo and Hannes Moser and
Rolf Niedermeier",
title =        "A Complexity Dichotomy for Finding Disjoint Solutions
of Vertex Deletion Problems",
journal =      j-TOCT,
volume =       "2",
number =       "2",
pages =        "5:1--5:??",
month =        mar,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/1944857.1944860",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Mar 28 09:50:20 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Hitchcock:2011:KCR,
author =       "John M. Hitchcock and A. Pavan and N. V.
Vinodchandran",
title =        "{Kolmogorov} Complexity in Randomness Extraction",
journal =      j-TOCT,
volume =       "3",
number =       "1",
pages =        "1:1--1:??",
month =        aug,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2003685.2003686",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Sep 5 16:58:07 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Kulkarni:2011:PIP,
author =       "Raghav Kulkarni",
title =        "On the Power of Isolation in Planar Graphs",
journal =      j-TOCT,
volume =       "3",
number =       "1",
pages =        "2:1--2:??",
month =        aug,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2003685.2003687",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Sep 5 16:58:07 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Smyth:2011:AQC,
author =       "Clifford Smyth",
title =        "Approximate Query Complexity",
journal =      j-TOCT,
volume =       "3",
number =       "1",
pages =        "3:1--3:??",
month =        aug,
year =         "2011",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2003685.2003688",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
bibdate =      "Mon Sep 5 16:58:07 MDT 2011",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Cook:2012:PBP,
author =       "Stephen Cook and Pierre McKenzie and Dustin Wehr and
Mark Braverman and Rahul Santhanam",
title =        "Pebbles and Branching Programs for Tree Evaluation",
journal =      j-TOCT,
volume =       "3",
number =       "2",
pages =        "4:1--4:??",
month =        jan,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2077336.2077337",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Fri Mar 16 15:22:48 MDT 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We introduce the tree evaluation problem, show that it
is in LogDCFL (and hence in $P$), and study its
branching program complexity in the hope of eventually
proving a superlogarithmic space lower bound. The input
to the problem is a rooted, balanced $d$-ary tree of
height $h$, whose internal nodes are labeled with $d$-ary
functions on $[k] = \{1, \ldots{}, k\}$, and whose
leaves are labeled with elements of $[ k ]$. Each node
obtains a value in $[ k ]$ equal to its $d$-ary
function applied to the values of its $d$ children. The
output is the value of the root. We show that the
standard black pebbling algorithm applied to the binary
tree of height $h$ yields a deterministic $k$-way
branching program with $O(k^h)$ states solving this
problem, and we prove that this upper bound is tight
for $h = 2$ and $h = 3$. We introduce a simple semantic
restriction called thrifty on $k$-way branching
programs solving tree evaluation problems and show that
the same state bound of $\Theta(k^h)$ is tight for
all $h \geq 2$ for deterministic thrifty programs. We
introduce fractional pebbling for trees and show that
this yields nondeterministic thrifty programs with
$\Theta(k^{h/2 + 1})$ states solving the Boolean problem
``determine whether the root has value 1'', and prove
that this bound is tight for $h = 2, 3, 4$. We also
prove that this same bound is tight for unrestricted
nondeterministic $k$-way branching programs solving the
Boolean problem for $h = 2, 3$.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Gal:2012:TQL,
author =       "Anna Gal and Andrew Mills",
title =        "Three-Query Locally Decodable Codes with Higher
Correctness Require Exponential Length",
journal =      j-TOCT,
volume =       "3",
number =       "2",
pages =        "5:1--5:??",
month =        jan,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2077336.2077338",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Fri Mar 16 15:22:48 MDT 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Locally decodable codes are error-correcting codes
with the extra property that, in order to retrieve the
value of a single input position, it is sufficient to
read a small number of positions of the codeword. We
refer to the probability of getting the correct value
as the correctness of the decoding algorithm. A
breakthrough result by Yekhanin [2007] showed that
3-query linear locally decodable codes may have
subexponential length. The construction of Yekhanin,
and the three query constructions that followed,
achieve correctness only up to a certain limit which is
$1 - 3\delta$ for nonbinary codes, where an adversary
is allowed to corrupt up to $\delta$ fraction of the
codeword. The largest correctness for a subexponential
length 3-query binary code is achieved in a
construction by Woodruff [2008], and it is below $1 -
3\delta$. We show that achieving slightly larger
correctness (as a function of $\delta$) requires
exponential codeword length for 3-query codes.
Previously, there were no larger than quadratic lower
bounds known for locally decodable codes with more than
2 queries, even in the case of 3-query linear codes.
Our lower bounds hold for linear codes over arbitrary
finite fields and for binary nonlinear codes.
Considering larger number of queries, we obtain lower
bounds for $q$-query codes for $q > 3$, under certain
assumptions on the decoding algorithm that have been
commonly used in previous constructions. We also prove
bounds on the largest correctness achievable by these
decoding algorithms, regardless of the length of the
code. Our results explain the limitations on
correctness in previous constructions using such
decoding algorithms. In addition, our results imply
trade-offs on the parameters of error-correcting data
structures.",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Beame:2012:VMR,
author =       "Paul Beame and Trinh Huynh",
title =        "The Value of Multiple {Read\slash} Write Streams for
Approximating Frequency Moments",
journal =      j-TOCT,
volume =       "3",
number =       "2",
pages =        "6:1--6:??",
month =        jan,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2077336.2077339",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Fri Mar 16 15:22:48 MDT 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We consider the read/write streams model, an extension
of the standard data stream model in which an algorithm
can create and manipulate multiple read/write streams
in addition to its input data stream. Like the data
stream model, the most important parameter for this
model is the amount of internal memory used by such an
algorithm. The other key parameters are the number of
streams the algorithm uses and the number of passes it
makes on these streams. We consider how the addition of
multiple streams impacts the ability of algorithms to
approximate the frequency moments of the input stream.
We show that any randomized read/write stream algorithm
with a fixed number of streams and a sublogarithmic
number of passes that produces a constant factor
approximation of the $k$ -th frequency moment $F_k$ of
an input sequence of length of at most $N$ from
$\{1,\ldots{},N\}$ requires space $\Omega(N^{1 - 4/k - \delta})$ for any $\delta > 0$. For comparison, it is
known that with a single read-only one-pass data stream
there is a randomized constant-factor approximation for
$F_k$ using $\tilde{O}(N^{1 - 2/k})$ space, and that by
sorting, which can be done deterministically in $O(\log N)$ passes using $3$ read/write streams, $F_k$ can be
computed exactly. Therefore, although the ability to
use read/write streams adds substantial power to the
data stream model, with a
sublogarithmic number of passes this does not
significantly improve the ability to approximate higher
frequency moments efficiently. We obtain our results by
showing a new connection between the read/write streams
model and the multiparty number-in-hand communication
model.",
acknowledgement = ack-nhfb,
articleno =    "6",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Tani:2012:EQA,
author =       "Seiichiro Tani and Hirotada Kobayashi and Keiji
Matsumoto",
title =        "Exact Quantum Algorithms for the Leader Election
Problem",
journal =      j-TOCT,
volume =       "4",
number =       "1",
pages =        "1:1--1:??",
month =        mar,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2141938.2141939",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:48 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "This article shows that quantum computing surpasses
classical models in pure (i.e., noncryptographic)
computing abilities with no restriction on the amount
of available computing resources, by considering the
exact solvability of the leader election problem in
anonymous networks, a celebrated unsolvable problem in
classical distributed computing. The goal of the leader
election problem is to elect a unique leader from among
distributed parties. In an anonymous network, all
parties with the same number of communication links are
identical. It is well-known that no classical algorithm
can exactly solve (i.e., in bounded time without error)
the leader election problem in anonymous networks, even
if the number of parties is given. In contrast, this
article presents a quantum algorithm that, if the number of parties is
given, exactly solves the problem for any network
topology in polynomial rounds with polynomial
communication/time complexity with respect to the
number of parties, when the parties are connected with
quantum communication links and they have the ability
of quantum computing. Our algorithm works even when
only an upper bound on the number of parties is given.
In such a case, no classical algorithm can solve the
problem even under the zero-error setting, the setting
in which error is not allowed but running time may be
unbounded.",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Cheraghchi:2012:ALT,
author =       "Mahdi Cheraghchi and Johan H{\aa}stad and Marcus
Isaksson and Ola Svensson",
title =        "Approximating Linear Threshold Predicates",
journal =      j-TOCT,
volume =       "4",
number =       "1",
pages =        "2:1--2:??",
month =        mar,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2141938.2141940",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:48 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study constraint satisfaction problems on the
domain {-1, 1}, where the given constraints are
homogeneous linear threshold predicates, that is,
predicates of the form $\sgn(w_1 x_1 + \cdots + w_n x_n)$ for some positive integer weights $w_1, \ldots{}, w_n$. Despite their simplicity, current techniques fall
short of providing a classification of these predicates
in terms of approximability. In fact, it is not easy to
guess whether there exists a homogeneous linear
threshold predicate that is approximation resistant or
not. In this work, we study the approximation curve of a class of threshold
predicates that allow for nontrivial
approximation. Arguably the simplest such predicate is
the majority predicate $\sgn(x_1 + \cdots + x_n)$, for
which we obtain an almost complete understanding of the
asymptotic approximation curve, assuming the Unique
Games Conjecture. Our techniques extend to a more
general class of ``majority-like'' predicates and we
obtain parallel results for them. In order to classify
these predicates, we introduce the notion of
Chow-robustness that might be of independent
interest.",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{De:2012:ELB,
author =       "Anindya De and Thomas Watson",
title =        "Extractors and Lower Bounds for Locally Samplable
Sources",
journal =      j-TOCT,
volume =       "4",
number =       "1",
pages =        "3:1--3:??",
month =        mar,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2141938.2141941",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:48 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We consider the problem of extracting randomness from
sources that are efficiently samplable, in the sense
that each output bit of the sampler only depends on
some small number $d$ of the random input bits. As our
main result, we construct a deterministic extractor
that, given any $d$-local source with min-entropy $k$
on $n$ bits, extracts $\Omega(k^2 / (n d))$ bits that are
$2^{-n^{\Omega(1)}}$-close to uniform, provided $d \leq o(\log n)$ and $k \geq n^{2/3 + \gamma}$ (for
arbitrarily small constants $\gamma > 0$). Using our
result, we also improve a result of Viola [2010] who
proved a $1/2 - O(1/\log n)$ statistical distance lower
bound for $o(\log n)$-local samplers trying to sample
input-output pairs of an explicit boolean function,
assuming the samplers use at most $n + n^{1 - \delta}$
random bits for some constant $\delta > 0$. Using a
different function, we simultaneously improve the lower
bound to $1/2 - 2^{-n^{\Omega(1)}}$ and eliminate the
restriction on the number of random bits.",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Schoenebeck:2012:CCN,
author =       "Grant R. Schoenebeck and Salil Vadhan",
title =        "The Computational Complexity of {Nash} Equilibria in
Concisely Represented Games",
journal =      j-TOCT,
volume =       "4",
number =       "2",
pages =        "4:1--4:??",
month =        may,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2189778.2189779",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:49 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Games may be represented in many different ways, and
different representations of games affect the
complexity of problems associated with games, such as
finding a Nash equilibrium. The traditional method of
representing a game is to explicitly list all the
payoffs, but this incurs an exponential blowup as the
number of agents grows. We study two models of
concisely represented games: circuit games, where the
payoffs are computed by a given boolean circuit, and
graph games, where each agent's payoff is a function of
only the strategies played by its neighbors in a given
graph. For these two models, we study the complexity of
four questions: determining if a given strategy is a
Nash equilibrium, finding a Nash equilibrium,
determining if there exists a pure Nash equilibrium,
and determining if there exists a Nash equilibrium in
which the payoffs to a player meet some given
guarantees. In many cases, we obtain tight results,
showing that the problems are complete for various
complexity classes.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Kawamura:2012:CTO,
author =       "Akitoshi Kawamura and Stephen Cook",
title =        "Complexity Theory for Operators in Analysis",
journal =      j-TOCT,
volume =       "4",
number =       "2",
pages =        "5:1--5:??",
month =        may,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2189778.2189780",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:49 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We propose an extension of the framework for
discussing the computational complexity of problems
involving uncountably many objects, such as real
numbers, sets and functions, that can be represented
only through approximation. The key idea is to use a
certain class of string functions as names representing
these objects. These are more expressive than infinite
sequences, which served as names in prior work that
formulated complexity in more restricted settings. An
advantage of using string functions is that we can
define their size in a way inspired by higher-type
complexity theory. This enables us to talk about
computation on string functions whose time or space is
bounded polynomially in the input size, giving rise to
more general analogues of the classes P, NP, and
PSPACE. We also define NP- and PSPACE-completeness
under suitable many-one reductions. Because our
framework separates machine computation and semantics,
it can be applied to problems on sets of interest in
analysis once we specify a suitable representation
(encoding). As prototype applications, we consider the
complexity of functions (operators) on real numbers,
real sets, and real functions. For example, the task of
numerical algorithms for solving a certain class of
differential equations is naturally viewed as an
operator taking real functions to real functions. As
there was no complexity theory for operators, previous
results only stated how complex the solution can be. We
now reformulate them and show that the operator itself
is polynomial-space complete.",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Penna:2012:CRM,
author =       "Paolo Penna and Carmine Ventre",
title =        "Collusion-Resistant Mechanisms with Verification
Yielding Optimal Solutions",
journal =      j-TOCT,
volume =       "4",
number =       "2",
pages =        "6:1--6:??",
month =        may,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2189778.2189781",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:49 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "A truthful mechanism consists of an algorithm
augmented with a suitable payment function that
guarantees that the players cannot improve their
utilities by cheating. Mechanism design approaches are
particularly appealing for designing protocols that
cannot be manipulated by rational players. We present
new constructions of so-called mechanisms with
verification introduced by Nisan and Ronen [2001]. We
first show how to obtain mechanisms that, for
single-parameter domains, are resistant to coalitions
of colluding agents even if they can exchange
compensations. Based on this result we derive a class
of exact truthful mechanisms with verification for
arbitrary bounded domains. This class of problems
includes most of the problems studied in the
algorithmic mechanism design literature and for which
exact solutions cannot be obtained with truthful
mechanisms without verification. This result is an
improvement over all known previous constructions of
exact mechanisms with verification.",
acknowledgement = ack-nhfb,
articleno =    "6",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Beyersdorff:2012:PBD,
author =       "Olaf Beyersdorff and Nicola Galesi and Massimo Lauria
and Alexander A. Razborov",
title =        "Parameterized Bounded-Depth {Frege} Is not Optimal",
journal =      j-TOCT,
volume =       "4",
number =       "3",
pages =        "7:1--7:??",
month =        sep,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2355580.2355582",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:50 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "A general framework for parameterized proof complexity
was introduced by Dantchev et al. [2007]. There, the
authors show important results on tree-like
Parameterized Resolution---a parameterized version of
classical Resolution---and their gap complexity theorem
implies lower bounds for that system. The main result
of this article shows optimal lower bounds for a parameterized
version of bounded-depth Frege. More precisely, we
prove that the pigeonhole principle requires proofs of
size $n^{\Omega(k)}$ in parameterized bounded-depth
Frege, and, as a special case, in dag-like
Parameterized Resolution. This answers an open question
posed in Dantchev et al. [2007]. In the opposite
direction, we interpret a well-known technique for FPT
algorithms as a DPLL procedure for Parameterized
Resolution. Its generalization leads to a proof search
algorithm for Parameterized Resolution that in
particular shows that tree-like Parameterized
Resolution allows short refutations of all
parameterized contradictions given as bounded-width
CNFs.",
acknowledgement = ack-nhfb,
articleno =    "7",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Watson:2012:RWW,
author =       "Thomas Watson",
title =        "Relativized Worlds without Worst-Case to Average-Case
Reductions for {NP}",
journal =      j-TOCT,
volume =       "4",
number =       "3",
pages =        "8:1--8:??",
month =        sep,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2355580.2355583",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Nov 6 18:23:50 MST 2012",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We prove that, relative to an oracle, there is no
worst-case to average-case reduction for NP. We also
handle classes that are somewhat larger than NP, as
well as worst-case to errorless-average-case
reductions. In fact, we prove that relative to an
oracle, there is no worst-case to
errorless-average-case reduction from NP to
BPP$_{||}^{NP}$. We also handle reductions from NP to
the polynomial-time hierarchy and beyond, under strong
restrictions on the number of queries the reductions
can make.",
acknowledgement = ack-nhfb,
articleno =    "8",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Kayal:2012:SSR,
author =       "Neeraj Kayal and Chandan Saha",
title =        "On the Sum of Square Roots of Polynomials and Related
Problems",
journal =      j-TOCT,
volume =       "4",
number =       "4",
pages =        "9:1--9:??",
month =        nov,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2382559.2382560",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Sun May 5 09:31:28 MDT 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The sum of square roots over integers problem is the
task of deciding the sign of a nonzero sum, {$S = \sum_{i = 1}^n \delta_i \cdot \sqrt{a_i}$}, where
$\delta_i \in \{+1, -1\}$ and the $a_i$'s are positive
integers that are upper bounded by {$N$} (say). A
fundamental open question in numerical analysis and
computational geometry is whether {$|S| \geq 1 / 2^{(n \cdot \log N)^{O(1)}}$} when {$S \neq 0$}. We
study a formulation of this problem over polynomials.
Given an expression {$S = \sum_{i = 1}^n c_i \cdot \sqrt{f_i(x)}$}, where the $c_i$'s belong to a field of
characteristic $0$ and $f_i$'s are univariate
polynomials with degree bounded by $d$ and $f_i(0) \neq 0$ for all $i$, is it true that the minimum
exponent of $x$ that has a nonzero coefficient in the
power series {$S$} is upper bounded by {$(n \cdot d)^{O(1)}$}, unless {$S = 0$}? We answer this
question affirmatively. Further, we show that this
result over polynomials can be used to settle
(positively) the sum of square roots problem for a
special class of integers: Suppose each integer $a_i$
is of the form {$a_i = X^{d_i} + b_{i 1} X^{d_i - 1} + \cdots + b_{i d_i}$}, $d_i > 0$, where {$X$} is a
positive real number and the $b_{ij}$'s are integers. Let
{$B = \max(\max_{i, j} |b_{ij}|, 1)$} and $d = \max_i \{ d_i \}$. If {$X > (B + 1)^{(n \cdot d)^{O(1)}}$}
then a nonzero {$S = \sum_{i = 1}^n \delta_i \sqrt{a_i}$} is lower bounded as {$|S| \geq 1 / X^{(n \cdot d)^{O(1)}}$}. The constant in {$O(1)$}, as fixed by our
analysis, is roughly $2$. We then consider the
following more general problem. Given an arithmetic
circuit computing a multivariate polynomial {$f (X)$}
and integer $d$, is the degree of {$f (X)$} less than
or equal to $d$? We give a {coRP$^{PP}$}-algorithm for
this problem, improving previous results of Allender et
al. [2009] and Koiran and Perifel [2007].",
acknowledgement = ack-nhfb,
articleno =    "9",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Pass:2012:PRT,
author =       "Rafael Pass and Muthuramakrishnan Venkitasubramaniam",
title =        "A Parallel Repetition Theorem for Constant-Round
{Arthur--Merlin} Proofs",
journal =      j-TOCT,
volume =       "4",
number =       "4",
pages =        "10:1--10:??",
month =        nov,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2382559.2382561",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Sun May 5 09:31:28 MDT 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We show a parallel-repetition theorem for
constant-round Arthur--Merlin Proofs, using an
efficient reduction. As a consequence, we show that
parallel repetition reduces the soundness-error at an
optimal rate (up to a negligible factor) in
constant-round public-coin argument systems, and
constant-round public-coin proofs of knowledge. The
first of these results resolves an open question posed
by Bellare, Impagliazzo, and Naor (FOCS'97).",
acknowledgement = ack-nhfb,
articleno =    "10",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Ron:2012:AIM,
author =       "Dana Ron and Ronitt Rubinfeld and Muli Safra and Alex
Samorodnitsky and Omri Weinstein",
title =        "Approximating the Influence of Monotone {Boolean}
Functions in {$O(\sqrt n)$} Query Complexity",
journal =      j-TOCT,
volume =       "4",
number =       "4",
pages =        "11:1--11:??",
month =        nov,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2382559.2382562",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Sun May 5 09:31:28 MDT 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The Total Influence (Average Sensitivity) of a
discrete function is one of its fundamental measures.
We study the problem of approximating the total
influence of a monotone Boolean function, which we
denote by {$I[f]$}. We present a randomized algorithm
that approximates the influence of such functions to
within a multiplicative factor of $(1 \pm \epsilon)$
by performing {$O(\sqrt n / I[f] \cdot \poly (1 / \epsilon))$} queries. We also prove a lower bound of {$\Omega (\sqrt n / (\log n \cdot I[f]))$} on the query complexity
of any constant factor approximation algorithm for this
problem (which holds for {$I[f] = \Omega (1)$}),
hence showing that our algorithm is almost optimal in
terms of its dependence on $n$. For general functions,
we give a lower bound of {$\Omega (n / I[f])$}, which
matches the complexity of a simple sampling
algorithm.",
acknowledgement = ack-nhfb,
articleno =    "11",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Vlassis:2012:CCS,
author =       "Nikos Vlassis and Michael L. Littman and David
Barber",
title =        "On the Computational Complexity of Stochastic
Controller Optimization in {POMDPs}",
journal =      j-TOCT,
volume =       "4",
number =       "4",
pages =        "12:1--12:??",
month =        nov,
year =         "2012",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2382559.2382563",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Sun May 5 09:31:28 MDT 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We show that the problem of finding an optimal
stochastic blind controller in a Markov decision
process is an NP-hard problem. The corresponding
decision problem is NP-hard, in PSPACE, and
sqrt-sum-hard; hence placing it in NP would imply
breakthroughs in long-standing open problems in
computer science. Our result establishes that the more
general problem of stochastic controller optimization
in POMDPs is also NP-hard. Nonetheless, we outline a
special case that is convex and admits efficient global
solutions.",
acknowledgement = ack-nhfb,
articleno =    "12",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Austrin:2013:UP,
author =       "Per Austrin and Johan H{\aa}stad",
title =        "On the usefulness of predicates",
journal =      j-TOCT,
volume =       "5",
number =       "1",
pages =        "1:1--1:??",
month =        may,
year =         "2013",
CODEN =        "????",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:04 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Motivated by the pervasiveness of strong
inapproximability results for Max-CSPs, we introduce a
relaxed notion of an approximate solution of a Max-CSP.
In this relaxed version, loosely speaking, the
algorithm is allowed to replace the constraints of an
instance by some other (possibly real-valued)
constraints, and then only needs to satisfy as many of
the new constraints as possible. To be more precise, we
introduce the following notion of a predicate $P$ being
useful for a (real-valued) objective $Q$: given an
almost satisfiable Max-$P$ instance, there is an
algorithm that beats a random assignment on the
corresponding Max-$Q$ instance applied to the same sets
of literals. The standard notion of a nontrivial
approximation algorithm for a Max-CSP with predicate
$P$ is exactly the same as saying that $P$ is useful
for $P$ itself. We say that $P$ is useless if it is not
useful for any $Q$. This turns out to be equivalent to
the following pseudo-randomness property: given an
almost satisfiable instance of Max-$P$, it is hard to
find an assignment such that the induced distribution
on $k$-bit strings defined by the instance is not
essentially uniform. Under the unique games conjecture,
we give a complete and simple characterization of
useful Max-CSPs defined by a predicate: such a Max-CSP
is useless if and only if there is a pairwise
independent distribution supported on the satisfying
assignments of the predicate. It is natural to also
consider the case when no negations are allowed in the
CSP instance, and we derive a similar complete
characterization (under the UGC) there as well.
Finally, we also include some results and examples
shedding additional light on the approximability of
certain Max-CSPs.",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Beyersdorff:2013:VPC,
author =       "Olaf Beyersdorff and Samir Datta and Andreas Krebs and
Meena Mahajan and Gido Scharfenberger-Fabian and
Karteek Sreenivasaiah and Michael Thomas and Heribert
Vollmer",
title =        "Verifying proofs in constant depth",
journal =      j-TOCT,
volume =       "5",
number =       "1",
pages =        "2:1--2:??",
month =        may,
year =         "2013",
CODEN =        "????",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:04 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "In this paper we initiate the study of proof systems
where verification of proofs proceeds by NC$^0$
circuits. We investigate the question which languages
admit proof systems in this very restricted model.
Formulated alternatively, we ask which languages can be
enumerated by NC$^0$ functions. Our results show that
the answer to this problem is not determined by the
complexity of the language. On the one hand, we
construct NC$^0$ proof systems for a variety of
languages ranging from regular to NP-complete. On the
other hand, we show by combinatorial methods that even
easy regular languages such as Exact-OR do not admit
NC$^0$ proof systems. We also show that Majority does
not admit NC$^0$ proof systems. Finally, we present a
general construction of NC$^0$ proof systems for
regular languages with strongly connected NFA's.",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Cygan:2013:MCP,
author =       "Marek Cygan and Marcin Pilipczuk and Michal Pilipczuk
and Jakub Onufry Wojtaszczyk",
title =        "On multiway cut parameterized above lower bounds",
journal =      j-TOCT,
volume =       "5",
number =       "1",
pages =        "3:1--3:??",
month =        may,
year =         "2013",
CODEN =        "????",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:04 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We introduce a concept of parameterizing a problem
above the optimum solution of its natural linear
programming relaxation and prove that the node multiway
cut problem is fixed-parameter tractable (FPT) in this
setting. As a consequence we prove that node multiway
cut is FPT, when parameterized above the maximum
separating cut, resolving an open problem of Razgon.
Our results imply $O^*(4^k)$ algorithms for vertex
cover above maximum matching and almost 2-SAT as well
as an $O^*(2^k)$ algorithm for node multiway cut with
a standard parameterization by the solution size,
improving previous bounds for these problems.",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Englert:2013:EC,
author =       "Matthias Englert and Heiko R{\"o}glin and Jacob
Sp{\"o}nemann and Berthold V{\"o}cking",
title =        "Economical Caching",
journal =      j-TOCT,
volume =       "5",
number =       "2",
pages =        "4:1--4:??",
month =        jul,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2493246.2493247",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:08 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study the management of buffers and storages in
environments with unpredictably varying prices in a
competitive analysis. In the economical caching
problem, there is a storage with a certain capacity.
For each time step, an online algorithm is given a
price from the interval $[1, \alpha]$, a consumption,
and possibly a buying limit. The online algorithm has
to decide the amount to purchase from some commodity,
knowing the parameter $\alpha$ but without knowing
how the price evolves in the future. The algorithm can
purchase at most the buying limit. If it purchases more
than the current consumption, then the excess is stored
in the storage; otherwise, the gap between consumption
and purchase must be taken from the storage. The goal
is to minimize the total cost. Interesting motivating
applications are, for example, stream caching on mobile
devices with different classes of service, battery
management in micro hybrid cars, and the efficient
purchase of resources. First we consider the simple but
natural class of algorithms that can informally be
described as memoryless. We show that these algorithms
cannot achieve a competitive ratio below $\sqrt \alpha$. Then we present a more sophisticated deterministic
algorithm achieving a competitive ratio that can be expressed
in terms of the Lambert $W$ function. We prove that this
algorithm is optimal and that not even randomized
online algorithms can achieve a better competitive
ratio. On the other hand, we show how to achieve a
constant competitive ratio if the storage capacity of
the online algorithm exceeds the storage capacity of an
optimal offline algorithm by a factor of $\log \alpha$.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Bogdanov:2013:HFL,
author =       "Andrej Bogdanov and Akinori Kawachi and Hidetoki
Tanaka",
title =        "Hard Functions for Low-Degree Polynomials over Prime
Fields",
journal =      j-TOCT,
volume =       "5",
number =       "2",
pages =        "5:1--5:??",
month =        jul,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2493246.2493248",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:08 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We prove a form of hardness
amplification for low-degree polynomials over prime
fields, namely, we prove that if some function is
mildly hard to approximate by any low-degree
polynomials then the sum of independent copies of the
function is very hard to approximate by them. This
result generalizes the XOR lemma for low-degree
polynomials over the binary field given by Viola and
Wigderson [2008]. The main technical contribution is
the analysis of the Gowers norm over prime fields. For
the analysis, we discuss a generalized low-degree test,
which we call the Gowers test, for polynomials over
prime fields, which is a natural generalization of that
over the binary field given by Alon et al. [2003]. This
Gowers test provides a new technique to analyze the
Gowers norm over prime fields. Actually, the rejection
probability of the Gowers test can be analyzed in the
framework of Kaufman and Sudan [2008]. However, our
analysis is self-contained and quantitatively better.
By using our argument, we also prove the hardness of
modulo functions for low-degree polynomials over prime
fields.",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Williams:2013:ATP,
author =       "Ryan Williams",
title =        "Alternation-Trading Proofs, Linear Programming, and
Lower Bounds",
journal =      j-TOCT,
volume =       "5",
number =       "2",
pages =        "6:1--6:??",
month =        jul,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2493246.2493249",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:08 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "A fertile area of recent research has demonstrated
concrete polynomial-time lower bounds for natural hard
problems on restricted computational models. Among
these problems are Satisfiability, Vertex Cover,
Hamilton Path, MOD$_6$-SAT, Majority-of-Majority-SAT,
and Tautologies, to name a few. The proofs of these
lower bounds follow a proof-by-contradiction strategy
that we call alternation-trading proofs. An important
open problem is to determine how powerful
such proofs can possibly be. We propose a methodology
for studying these proofs that makes them amenable to
both formal analysis and automated theorem proving. We
prove that the search for better lower bounds can often
be turned into a problem of solving a large series of
linear programming instances. Implementing a
small-scale theorem prover based on these results, we
extract new human-readable time lower bounds for
several problems and identify patterns that allow for
further generalization. The framework can also be used
to prove concrete limitations on the current
techniques.",
acknowledgement = ack-nhfb,
articleno =    "6",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Ron:2013:ANR,
author =       "Dana Ron and Gilad Tsur",
title =        "On Approximating the Number of Relevant Variables in a
Function",
journal =      j-TOCT,
volume =       "5",
number =       "2",
pages =        "7:1--7:??",
month =        jul,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2493246.2493250",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:08 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "In this work we consider the problem of approximating
the number of relevant variables in a function given
query access to the function. Since obtaining a
multiplicative factor approximation is hard in general,
we consider several relaxations of the problem. In
particular, we consider a relaxation of the property
testing variant of the problem and we consider
relaxations in which we have a promise that the
function belongs to a certain family of functions
(e.g., linear functions). In the former relaxation the
task is to distinguish between the case that the number
of relevant variables is at most $k$, and the case in
which it is far from any function in which the number
of relevant variables is more than $(1 + \gamma) k$
for a parameter $\gamma$. We give both upper bounds
and almost matching lower bounds for the relaxations we
study.",
acknowledgement = ack-nhfb,
articleno =    "7",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Allender:2013:ISI,
author =       "Eric Allender and Shafi Goldwasser",
title =        "Introduction to the special issue on innovations in
theoretical computer science 2012",
journal =      j-TOCT,
volume =       "5",
number =       "3",
pages =        "8:1--8:??",
month =        aug,
year =         "2013",
DOI =          "http://dx.doi.org/10.1145/2493252.2493253",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:12 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
note =         "Special issue on innovations in theoretical computer
science 2012.",
acknowledgement = ack-nhfb,
articleno =    "8",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Pagh:2013:CMM,
author =       "Rasmus Pagh",
title =        "Compressed matrix multiplication",
journal =      j-TOCT,
volume =       "5",
number =       "3",
pages =        "9:1--9:??",
month =        aug,
year =         "2013",
DOI =          "http://dx.doi.org/10.1145/2493252.2493254",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:12 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
note =         "Special issue on innovations in theoretical computer
science 2012.",
abstract =     "We present a simple algorithm that approximates the
product of $n$-by-$n$ real matrices $A$ and $B$. Let
$|| A B ||_F$ denote the Frobenius norm of $A B$, and
$b$ be a parameter determining the time\slash accuracy
trade-off. Given $2$-wise independent hash functions
$h_1, h_2 : [n] \to [b]$, and $s_1, s_2 : [n] \to \{
- 1, + 1 \}$, the algorithm works by first
``compressing'' the matrix product into the polynomial
$p (x) = \sum_{k = 1}^n \left (\sum_{i = 1}^n A_{ik}
s_1 (i) x^{h_1 (i)} \right) \left (\sum_{j = 1}^n
B_{kj} s_2 (j) x^{h_2 (j)} \right)$. Using the fast
Fourier transform to compute polynomial
multiplication, we can compute $c_0, \ldots {}, c_{b -
1}$ such that $\sum_i c_i x^i = (p (x) \bmod x^b) +
(p (x) \div x^b)$ in time $\tilde O(n^2 + n b)$. An
unbiased estimator of $(A B)_{ij}$ with variance at
most $|| A B ||_F^2 / b$ can then be computed as:
$C_{ij} = s_1 (i) s_2 (j) c_{(h_1 (i) + h_2 (j)) \bmod
b}$. Our approach also leads to an algorithm for
computing $A B$ exactly, with high probability, in
time $\tilde O(N + n b)$ in the case where $A$ and $B$
have at most $N$ nonzero entries, and $A B$ has at
most $b$ nonzero entries.",
acknowledgement = ack-nhfb,
articleno =    "9",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}
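The abstract above specifies the compressed-product algorithm completely enough to sketch. A minimal Python sketch follows, with two stated assumptions: fully random hashes stand in for the $2$-wise independent families $h_1, h_2, s_1, s_2$, and NumPy's real FFT replaces the generic polynomial multiplication; the folding step implements $(p(x) \bmod x^b) + (p(x) \div x^b)$.

```python
import numpy as np

def compressed_matmul_estimate(A, B, b, seed=0):
    # Sketch of the compressed matrix product described above.
    # Assumption: fully random hashes in place of 2-wise independent families.
    rng = np.random.default_rng(seed)
    n = A.shape[0]
    h1 = rng.integers(0, b, size=n)          # h1, h2 : [n] -> [b]
    h2 = rng.integers(0, b, size=n)
    s1 = rng.choice([-1.0, 1.0], size=n)     # s1, s2 : [n] -> {-1, +1}
    s2 = rng.choice([-1.0, 1.0], size=n)
    c = np.zeros(b)
    for k in range(n):
        # Degree-(b-1) polynomials for column k of A and row k of B.
        p = np.zeros(b)
        q = np.zeros(b)
        np.add.at(p, h1, s1 * A[:, k])
        np.add.at(q, h2, s2 * B[k, :])
        # Multiply via FFT (length 2b covers the full linear convolution),
        # then fold: c_i accumulates (pq mod x^b) + (pq div x^b).
        r = np.fft.irfft(np.fft.rfft(p, 2 * b) * np.fft.rfft(q, 2 * b), 2 * b)
        c += r[:b] + r[b:]
    # Unbiased estimator C_ij = s1(i) s2(j) c_{(h1(i)+h2(j)) mod b}.
    def estimate(i, j):
        return s1[i] * s2[j] * c[(h1[i] + h2[j]) % b]
    return estimate
```

Averaging the estimator over independent hash choices converges to $(A B)_{ij}$, consistent with the unbiasedness and variance bound $|| A B ||_F^2 / b$ claimed in the abstract.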

@Article{Viderman:2013:LTD,
author =       "Michael Viderman",
title =        "Linear-time decoding of regular expander codes",
journal =      j-TOCT,
volume =       "5",
number =       "3",
pages =        "10:1--10:??",
month =        aug,
year =         "2013",
DOI =          "http://dx.doi.org/10.1145/2493252.2493255",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:12 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
note =         "Special issue on innovations in theoretical computer
science 2012.",
abstract =     "Sipser and Spielman (IEEE IT, [1996]) showed that any
$(c, d)$-regular expander code with expansion
parameter $> 3 / 4$ is decodable in linear time from
a constant fraction of errors. Feldman et al. (IEEE IT,
[2007]) proved that expansion parameter $> 2 / 3 + 1 / (3 c)$ is sufficient to correct a constant fraction
of errors in polynomial time using LP decoding. In this
work, we give a simple combinatorial algorithm that
achieves even better parameters. In particular, our
algorithm runs in linear time and works for any
expansion parameter $> 2 / 3 - 1 / (6 c)$. We also
prove that our decoding algorithm can be executed in
logarithmic time on a linear number of parallel
processors.",
acknowledgement = ack-nhfb,
articleno =    "10",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Ozols:2013:QRS,
author =       "Maris Ozols and Martin Roetteler and J{\'e}r{\'e}mie
Roland",
title =        "Quantum rejection sampling",
journal =      j-TOCT,
volume =       "5",
number =       "3",
pages =        "11:1--11:??",
month =        aug,
year =         "2013",
DOI =          "http://dx.doi.org/10.1145/2493252.2493256",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:12 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
note =         "Special issue on innovations in theoretical computer
science 2012.",
abstract =     "Rejection sampling is a well-known method to sample
from a target distribution, given the ability to sample
from a given distribution. The method has been first
formalized by von Neumann [1951] and has many
applications in classical computing. We define a
quantum analogue of rejection sampling: given a black
box producing a coherent superposition of (possibly
unknown) quantum states with some amplitudes, the
problem is to prepare a coherent superposition of the
same states, albeit with different target amplitudes.
The main result of this article is a tight
characterization of the query complexity of this
quantum state generation problem. We exhibit an
algorithm, which we call quantum rejection sampling,
and analyze its cost using semidefinite programming.
Our proof of a matching lower bound is based on the
automorphism principle that allows one to symmetrize any
algorithm over the automorphism group of the problem.
Our main technical innovation is an extension of the
automorphism principle to continuous groups that arise
for quantum state generation problems where the oracle
encodes unknown quantum states, instead of just
classical data. Furthermore, we illustrate how quantum
rejection sampling may be used as a primitive in
designing quantum algorithms, by providing three
different applications. We first show that it was
implicitly used in the quantum algorithm for linear
systems of equations by Harrow et al. [2009]. Second we
show that it can be used to speed up the main step in
the quantum Metropolis sampling algorithm by Temme et
al. [2011]. Finally, we derive a new quantum algorithm
for the hidden shift problem of an arbitrary Boolean
function and relate its query complexity to
``water-filling'' of the Fourier spectrum.",
acknowledgement = ack-nhfb,
articleno =    "11",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Drucker:2013:HCP,
author =       "Andrew Drucker",
title =        "High-confidence predictions under adversarial
uncertainty",
journal =      j-TOCT,
volume =       "5",
number =       "3",
pages =        "12:1--12:??",
month =        aug,
year =         "2013",
DOI =          "http://dx.doi.org/10.1145/2493252.2493257",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:12 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
note =         "Special issue on innovations in theoretical computer
science 2012.",
abstract =     "We study the setting in which the bits of an unknown
infinite binary sequence $x$ are revealed sequentially
to an observer. We show that very limited assumptions
about $x$ allow one to make successful predictions
about unseen bits of $x$. First, we study the problem
of successfully predicting a single 0 from among the
bits of $x$. In our model, we have only one chance to
make a prediction, but may do so at a time of our
choosing. This model is applicable to a variety of
situations in which we want to perform an action of
fixed duration, and need to predict a ``safe''
time-interval to perform it. Letting $N_t$ denote the
number of $1$'s among the first $t$ bits of $x$, we say
that $x$ is ``$\epsilon$-weakly sparse'' if $\liminf (N_t / t) \leq \epsilon$. Our main result is a
randomized algorithm that, given any $\epsilon$-weakly sparse sequence $x$, predicts a $0$ of $x$
with success probability as close as desired to $1 - \epsilon$. Thus, we can perform this task with
essentially the same success probability as under the
much stronger assumption that each bit of $x$ takes the
value $1$ independently with probability $\epsilon$.
We apply this result to show how to successfully
predict a bit ($0$ or $1$ ) under a broad class of
possible assumptions on the sequence $x$. The
assumptions are stated in terms of the behavior of a
finite automaton $M$ reading the bits of $x$. We also
propose and solve a variant of the well-studied
``ignorant forecasting'' problem. For every $\epsilon
> 0$, we give a randomized forecasting algorithm
$S_\epsilon$ that, given sequential access to a binary
sequence $x$, makes a prediction of the form: ``A $p$
fraction of the next $N$ bits will be $1$'s.'' (The
algorithm gets to choose $p$, $N$, and the time of the
prediction.) For any fixed sequence $x$, the forecast
fraction $p$ is accurate to within $\pm {} \epsilon$
with probability $1 - \epsilon$.",
acknowledgement = ack-nhfb,
articleno =    "12",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Chattopadhyay:2013:GIN,
author =       "Arkadev Chattopadhyay and Jacobo Tor{\'a}n and Fabian
Wagner",
title =        "Graph Isomorphism is Not {AC$^0$}-Reducible to Group
Isomorphism",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "13:1--13:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2540088",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We give a new upper bound for the Group and Quasigroup
Isomorphism problems when the input structures are
given explicitly by multiplication tables. We show that
these problems can be computed by polynomial size
nondeterministic circuits of unbounded fan-in with $O(\log \log n)$ depth and $O(\log^2 n)$
nondeterministic bits, where $n$ is the number of group
elements. This improves the existing upper bound for
the problems. In the previous bound the circuits have
bounded fan-in but depth $O(\log^2 n)$ and also $O(\log^2 n)$ nondeterministic bits. We then prove that
the kind of circuits from our upper bound cannot
compute the Parity function. Since Parity is
AC$^0$-reducible to Graph Isomorphism, this implies
that Graph Isomorphism is strictly harder than Group or
Quasigroup Isomorphism under the ordering defined by
AC$^0$ reductions. We extend this result to the
stronger ACC$^0 [p]$ reduction and its randomized
version.",
acknowledgement = ack-nhfb,
articleno =    "13",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{De:2013:EOH,
author =       "Anindya De and Elchanan Mossel",
title =        "Explicit Optimal Hardness via {Gaussian} Stability
Results",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "14:1--14:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2505766",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The results of Raghavendra [2008] show that assuming
Khot's Unique Games Conjecture [2002], for every
constraint satisfaction problem there exists a generic
semidefinite program that achieves the optimal
approximation factor. This result is existential as it
does not provide an explicit optimal rounding procedure
nor does it allow one to calculate exactly the Unique Games
hardness of the problem. Obtaining an explicit optimal
approximation scheme and the corresponding
approximation factor is a difficult challenge for each
specific approximation problem. Khot et al. [2004]
established a general approach for determining the
exact approximation factor and the corresponding
optimal rounding algorithm for any given constraint
satisfaction problem. However, this approach crucially
relies on results explicitly proving optimal partitions
in the Gaussian space. Until recently, Borell's result
[1985] was the only nontrivial Gaussian partition
result known. In this article, we derive the first
explicit optimal approximation algorithm and the
corresponding approximation factor using a new result
on Gaussian partitions due to Isaksson and Mossel
[2012]. This Gaussian result allows us to determine the
exact Unique Games Hardness of MAX-$3$-EQUAL. In
particular, our results show that Zwick's algorithm for
this problem achieves the optimal approximation factor
and prove that the approximation achieved by the
algorithm is $\approx 0.796$ as conjectured by Zwick
[1998]. We further use the previously known optimal
Gaussian partitions results to obtain a new Unique
Games Hardness factor for MAX-$k$-CSP: Using the
well-known fact that jointly normal pairwise
independent random variables are fully independent, we
show that the UGC hardness of Max-$k$-CSP is $\lceil (k + 1) / 2 \rceil / 2^{k - 1}$, improving on results of
Austrin and Mossel [2009].",
acknowledgement = ack-nhfb,
articleno =    "14",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Dalmau:2013:RSC,
author =       "V{\'\i}ctor Dalmau and Andrei Krokhin",
title =        "Robust Satisfiability for {CSPs}: Hardness and
Algorithmic Results",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "15:1--15:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2540090",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "An algorithm for a constraint satisfaction problem is
called robust if it outputs an assignment satisfying at
least a $(1 - f (\epsilon))$-fraction of constraints
for each $(1 - \epsilon)$-satisfiable instance (i.e.,
such that at most an $\epsilon$-fraction of constraints
needs to be removed to make the instance satisfiable),
where $f(\epsilon) \to 0$ as $\epsilon \to 0$. We
establish an algebraic framework for analyzing
constraint satisfaction problems admitting an efficient
robust algorithm with functions $f$ of a given growth
rate. We use this framework to derive hardness results.
We also describe three classes of problems admitting an
efficient robust algorithm such that $f$ is $O (1 / \log (1 / \epsilon))$, $O(\epsilon^{1 / k})$ for
some $k > 1$, and $O(\epsilon)$, respectively.
Finally, we give a complete classification of robust
satisfiability with a given $f$ for the Boolean case.",
acknowledgement = ack-nhfb,
articleno =    "15",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Fellows:2013:DFP,
author =       "Michael Fellows and Fedor V. Fomin and Daniel
Lokshtanov and Elena Losievskaja and Frances Rosamond
and Saket Saurabh",
title =        "Distortion is Fixed Parameter Tractable",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "16:1--16:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2489789",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study low-distortion embedding of metric spaces
into the line, and more generally, into the shortest
path metric of trees, from the parameterized complexity
perspective. Let $M = M (G)$ be the shortest path
metric of an edge-weighted graph $G$, with the vertex
set $V (G)$ and the edge set $E (G)$, on $n$
vertices. We give the first fixed parameter tractable
algorithm that for an unweighted graph metric $M$ and
integer $d$ either constructs an embedding of $M$ into
the line with distortion at most $d$, or concludes that
no such embedding exists. Our algorithm requires
$O(n d^4 (2 d + 1)^{2 d})$ time, which is a significant
improvement over the best previous algorithm that runs
in time $O(n^{4d + 2} d^{O(1)})$. Because of its
apparent similarity to the notoriously hard Bandwidth
Minimization problem, we find it surprising that this
problem turns out to be fixed parameter tractable. We
extend our results on embedding unweighted graph metric
into the line in two ways. First, we give an algorithm
to construct small-distortion embeddings of weighted
graph metrics. The running time of our algorithm is $O(n (d W)^4 (2 d + 1)^{2dW})$, where $W$ is the
largest edge weight of the input graph. To complement
this result, we show that the exponential dependence on
the maximum edge weight is unavoidable. In particular,
we show that deciding whether a weighted graph metric $M (G)$ with maximum weight $W < | V (G)|$ can be
embedded into the line with distortion at most $d$ is
NP-complete for every fixed rational $d \geq 2$. This
rules out any possibility of an algorithm with running
time $O((n W)^{h(d)})$ where $h$ is a function of $d$
alone. Second, we consider more general host metrics
for which analogous results hold. In particular, we
prove that for any tree $T$ with maximum degree
$\Delta$, embedding $M$ into a shortest path metric of $T$ is
fixed parameter tractable, parameterized by $(\Delta, d)$.",
acknowledgement = ack-nhfb,
articleno =    "16",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Razborov:2013:RA,
author =       "Alexander Razborov and Emanuele Viola",
title =        "Real Advantage",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "17:1--17:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2540089",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We highlight the challenge of proving correlation
bounds between boolean functions and real-valued
polynomials, where any non-boolean output counts
against correlation. We prove that real-valued
polynomials of degree $(1 / 2) \lg_2 \lg_2 n$ have
correlation with parity at most zero. Such a result is
false for modular and threshold polynomials. Its proof
is based on a variant of an anti-concentration result
by Costello et al. [2006].",
acknowledgement = ack-nhfb,
articleno =    "17",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Harkins:2013:ELA,
author =       "Ryan C. Harkins and John M. Hitchcock",
title =        "Exact Learning Algorithms, Betting Games, and Circuit
Lower Bounds",
journal =      j-TOCT,
volume =       "5",
number =       "4",
pages =        "18:1--18:??",
month =        nov,
year =         "2013",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2539126.2539130",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Dec 12 17:32:15 MST 2013",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "This article extends and improves the work of Fortnow
and Klivans [2009], who showed that if a circuit class
$C$ has an efficient learning algorithm in Angluin's
model of exact learning via equivalence and membership
queries [Angluin 1988], then we have the lower bound
EXP$^{NP} \not \subseteq C$. We use entirely different
techniques involving betting games [Buhrman et al.
2001] to remove the NP oracle and improve the lower
bound to EXP $\not \subseteq C$. This shows that it is even more
difficult to design a learning algorithm for $C$ than
the results of Fortnow and Klivans [2009] indicated. We
also investigate the connection between betting games
and natural proofs, and as a corollary the existence of
strong pseudorandom generators. Our results also yield
further evidence that the class of Boolean circuits has
no efficient exact learning algorithm. This is because
our separation is strong in that it yields a natural
proof [Razborov and Rudich 1997] against the class.
From this we conclude that an exact learning algorithm
for Boolean circuits would imply that strong
pseudorandom generators do not exist, which contradicts
widely believed conjectures from cryptography. As a
corollary we obtain that if strong pseudorandom
generators exist, then there is no exact learning
algorithm for Boolean circuits.",
acknowledgement = ack-nhfb,
articleno =    "18",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Ada:2014:HBP,
author =       "Anil Ada and Arkadev Chattopadhyay and Stephen A. Cook
and Lila Fontes and Michal Kouck{\'y} and Toniann
Pitassi",
title =        "The Hardness of Being Private",
journal =      j-TOCT,
volume =       "6",
number =       "1",
pages =        "1:1--1:??",
month =        mar,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2567671",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Apr 1 06:02:31 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Kushilevitz [1989] initiated the study of
information-theoretic privacy within the context of
communication complexity. Unfortunately, it has been
shown that most interesting functions are not privately
computable [Kushilevitz 1989, Brandt and Sandholm
2008]. The unattainability of perfect privacy for many
functions motivated the study of approximate privacy.
Feigenbaum et al. [2010a, 2010b] define notions of
worst-case as well as average-case approximate privacy
and present several interesting upper bounds as well as
lower bounds. In this article, we obtain asymptotically
tight bounds on the trade-offs
between both the worst-case and average-case
approximate privacy of protocols and their
communication cost for Vickrey auctions. Further, we
relate the notion of average-case approximate privacy
to other measures based on information cost of
protocols. This enables us to prove exponential lower
bounds on the subjective approximate privacy of
protocols for computing the Intersection function,
independent of its communication cost. This proves a
conjecture of Feigenbaum et al. [2010a].",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Austrin:2014:NNH,
author =       "Per Austrin and Ryan O'Donnell and Li-Yang Tan and
John Wright",
title =        "New {NP}-Hardness Results for $3$-Coloring and
$2$-to-$1$ Label Cover",
journal =      j-TOCT,
volume =       "6",
number =       "1",
pages =        "2:1--2:??",
month =        mar,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2537800",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Apr 1 06:02:31 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We show that given a 3-colorable graph, it is NP-hard
to find a 3-coloring with $(16 / 17 + \epsilon)$ of
the edges bichromatic. In a related result, we show
that given a satisfiable instance of the 2-to-1 Label
Cover problem, it is NP-hard to find a $(23 / 24 + \epsilon)$-satisfying assignment.",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Glasser:2014:UDN,
author =       "Christian Gla{\ss}er and John M. Hitchcock and A.
Pavan and Stephan Travers",
title =        "Unions of Disjoint {NP}-Complete Sets",
journal =      j-TOCT,
volume =       "6",
number =       "1",
pages =        "3:1--3:??",
month =        mar,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2591508",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Apr 1 06:02:31 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study the following question: if $A$ and $B$ are
disjoint NP-complete sets, then is $A \cup B$
NP-complete? We provide necessary and sufficient
conditions under which the union of disjoint
NP-complete sets remains complete.",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Sun:2014:ECN,
author =       "Shu-Ming Sun and Ning Zhong",
title =        "On Effective Convergence of Numerical Solutions for
Differential Equations",
journal =      j-TOCT,
volume =       "6",
number =       "1",
pages =        "4:1--4:??",
month =        mar,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2578219",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Apr 1 06:02:31 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "This article studies the effective convergence of
numerical solutions of initial value problems (IVPs)
for ordinary differential equations (ODEs). A
convergent sequence $\{ Y_m \}$ of numerical
solutions is said to be effectively convergent to the
exact solution if there is an algorithm that computes
an $N \in {\bf N}$, given an arbitrary $n \in {\bf N}$ as
input, such that the error between $Y_m$ and the
exact solution is less than $2^{-n}$ for all $m \geq N$. It is proved that there are convergent numerical
solutions generated from Euler's method which are not
effectively convergent. It is also shown that a
theoretically-proved-computable solution using Picard's
iteration method might not be computable by classical
numerical methods, which suggests that sometimes there
is a gap between theoretical computability and
practical numerical computations concerning solutions
of ODEs. Moreover, it is noted that the main theorem
(Theorem 4.1) provides an example of an IVP with a
nonuniform Lipschitz function for which the numerical
solutions generated by Euler's method are still
convergent.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}
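The convergence-versus-effective-convergence distinction drawn in the abstract above can be illustrated with ordinary forward Euler: the numerical solutions converge as the step count grows, but effective convergence additionally demands an algorithm that, for each $n$, outputs a step count guaranteeing error below $2^{-n}$. A minimal sketch of the plain (non-effective) convergence, using the standard test problem $y' = y$ (hypothetical example, not from the article):

```python
import math

def euler(f, t0, y0, t1, steps):
    # Classical forward Euler for y' = f(t, y), y(t0) = y0,
    # integrated from t0 to t1 in the given number of steps.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = y with y(0) = 1 has exact solution e^t; at t = 1, m-step Euler
# yields (1 + 1/m)^m, which converges to e as m grows.
err_100 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 100) - math.e)
err_200 = abs(euler(lambda t, y: y, 0.0, 1.0, 1.0, 200) - math.e)
```

Here the error shrinks as steps increase; the article's point is that such convergence alone does not yield a computable modulus $N(n)$ in general.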

@Article{ODonnell:2014:OLB,
author =       "Ryan O'Donnell and Yi Wu and Yuan Zhou",
title =        "Optimal Lower Bounds for Locality-Sensitive Hashing
(Except When $q$ is Tiny)",
journal =      j-TOCT,
volume =       "6",
number =       "1",
pages =        "5:1--5:??",
month =        mar,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2578221",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Apr 1 06:02:31 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/hash.bib;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study lower bounds for Locality-Sensitive Hashing
(LSH) in the strongest setting: point sets in $\{ 0, 1 \}^d$ under the Hamming distance. Recall that $H$ is
said to be an $(r, c r, p, q)$-sensitive hash family
if all pairs $x, y \in \{ 0, 1 \}^d$ with ${\rm dist}(x, y) \leq r$ have probability at least $p$ of
collision under a randomly chosen $h \in H$, whereas
all pairs $x, y \in \{ 0, 1 \}^d$ with ${\rm dist}(x, y) \geq c r$ have probability at most $q$ of
collision. Typically, one considers $d \to \infty$,
with $c > 1$ fixed and $q$ bounded away from $0$. For
its applications to approximate nearest-neighbor search
in high dimensions, the quality of an LSH family $H$ is
governed by how small its $\rho$ parameter $\rho = \ln (1 / p) / \ln (1 / q)$ is as a function of the
parameter $c$. The seminal paper of Indyk and Motwani
[1998] showed that for each $c \geq 1$, the extremely
simple family $H = \{ x \mapsto x_i : i \in [d] \}$ achieves $\rho \leq 1 / c$. The only known lower
bound, due to Motwani et al. [2007], is that $\rho$
must be at least $(e^{1 / c} - 1) / (e^{1 / c} + 1) \geq .46 / c$ (minus $o_d(1)$ ). The contribution of
this article is twofold. (1) We show the ``optimal''
lower bound for $\rho$: it must be at least $1 / c$
(minus $o_d(1)$ ). Our proof is very simple,
following almost immediately from the observation that
the noise stability of a boolean function at time $t$
is a log-convex function of $t$. (2) We raise and
discuss the following issue: neither the application of
LSH to nearest-neighbor search nor the known LSH lower
bounds hold as stated if the $q$ parameter is tiny. Here,
``tiny'' means $q = 2^{- \Theta (d)}$, a parameter
range we believe is natural.",
acknowledgement = ack-nhfb,
articleno =    "5",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Cygan:2014:CCG,
author =       "Marek Cygan and Stefan Kratsch and Marcin Pilipczuk
and Michal Pilipczuk and Magnus Wahlstr{\"o}m",
title =        "Clique Cover and Graph Separation: New
Incompressibility Results",
journal =      j-TOCT,
volume =       "6",
number =       "2",
pages =        "6:1--6:??",
month =        may,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2594439",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Jun 5 14:42:53 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The field of kernelization studies polynomial-time
preprocessing routines for hard problems in the
framework of parameterized complexity. In this article,
we show that, unless the polynomial hierarchy collapses
to its third level, the following parameterized
problems do not admit a polynomial-time preprocessing
algorithm that reduces the size of an instance to
polynomial in the parameter: ---Edge Clique Cover,
parameterized by the number of cliques, ---Directed
Edge/Vertex Multiway Cut, parameterized by the size of
the cutset, even in the case of two terminals,
---Edge/Vertex Multicut, parameterized by the size of
the cutset, and ---$k$-Way Cut, parameterized by the
size of the cutset.",
acknowledgement = ack-nhfb,
articleno =    "6",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Chen:2014:HIA,
author =       "Yijia Chen and J{\"o}rg Flum and Moritz M{\"u}ller",
title =        "Hard Instances of Algorithms and Proof Systems",
journal =      j-TOCT,
volume =       "6",
number =       "2",
pages =        "7:1--7:??",
month =        may,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2601336",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Jun 5 14:42:53 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "If the class TAUT of tautologies of propositional
logic has no almost optimal algorithm, then every
algorithm A deciding TAUT has a hard sequence, that is,
a polynomial time computable sequence witnessing that A
is not almost optimal. We show that this result extends
to every $\Pi_t^p$-complete problem with $t \geq 1$;
however, assuming the Measure Hypothesis, there is a
problem which has no almost optimal algorithm but is
decided by an algorithm without hard sequences. For
problems Q with an almost optimal algorithm, we analyze
whether every algorithm deciding Q, which is not almost
optimal, has a hard sequence.",
acknowledgement = ack-nhfb,
articleno =    "7",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Goldberg:2014:CAC,
author =       "Leslie Ann Goldberg and Mark Jerrum",
title =        "The Complexity of Approximately Counting Tree
Homomorphisms",
journal =      j-TOCT,
volume =       "6",
number =       "2",
pages =        "8:1--8:??",
month =        may,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2600917",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Jun 5 14:42:53 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study two computational problems, parameterised by
a fixed tree H. \#HOMSTO( H ) is the problem of
counting homomorphisms from an input graph G to H.
\#WHOMSTO( H ) is the problem of counting weighted
homomorphisms to H, given an input graph G and a weight
function for each vertex v of G. Even though H is a
tree, these problems turn out to be sufficiently rich
to capture all of the known approximation behaviour in
\#P. We give a complete trichotomy for \#WHOMSTO( H ).
If H is a star, then \#WHOMSTO( H ) is in FP. If H is
not a star but it does not contain a certain induced
subgraph J$_3$, then \#WHOMSTO( H ) is equivalent under
approximation-preserving (AP) reductions to \#BIS, the
problem of counting independent sets in a bipartite
graph. This problem is complete for the class
$\#{\rm RH}\Pi_1$ under AP-reductions. Finally, if H contains an
induced J$_3$, then \#WHOMSTO( H ) is equivalent under
AP-reductions to \#SAT, the problem of counting
satisfying assignments to a CNF Boolean formula. Thus,
\#WHOMSTO( H ) is complete for \#P under AP-reductions.
The results are similar for \#HOMSTO( H ) except that a
rich structure emerges if H contains an induced J$_3$.
We show that there are trees H for which \#HOMSTO( H )
is \#SAT-equivalent (disproving a plausible
conjecture of Kelk). However, it is still not known
whether \#HOMSTO( H ) is \#SAT-hard for every tree H
which contains an induced J$_3$. It turns out that there
is an interesting connection between these
homomorphism-counting problems and the problem of
approximating the partition function of the
ferromagnetic Potts model. In particular, we show that
for a family of graphs J$_q$, parameterised by a positive
integer $q$, the problem \#HOMSTO( J$_q$ ) is
AP-interreducible with the problem of approximating the
partition function of the $q$-state Potts model. It was
not previously known that the Potts model had a
homomorphism-counting interpretation. We use this
connection to obtain some additional upper bounds for
the approximation complexity of \#HOMSTO( J$_q$ ).",
acknowledgement = ack-nhfb,
articleno =    "8",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Etessami:2014:NCC,
author =       "Kousha Etessami and Alistair Stewart and Mihalis
Yannakakis",
title =        "A Note on the Complexity of Comparing Succinctly
Represented Integers, with an Application to Maximum
Probability Parsing",
journal =      j-TOCT,
volume =       "6",
number =       "2",
pages =        "9:1--9:??",
month =        may,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2601327",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Thu Jun 5 14:42:53 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The following two decision problems capture the
complexity of comparing integers or rationals that are
succinctly represented in product-of-exponentials
notation, or equivalently, via arithmetic circuits
using only multiplication and division gates, and
integer inputs. Input instance: Four lists of positive
integers: $(a_1, \ldots{}, a_n) \in {\bf N}_+^n$; $(b_1, \ldots{}, b_n)
\in {\bf N}_+^n$; $(c_1, \ldots{}, c_m) \in {\bf N}_+^m$; $(d_1, \ldots{}, d_m)
\in {\bf N}_+^m$; where each of the integers is represented in
binary. Problem 1 (equality testing): Decide whether
$a_1^{b_1} a_2^{b_2} \cdots a_n^{b_n} = c_1^{d_1} c_2^{d_2} \cdots c_m^{d_m}$. Problem 2 (inequality testing): Decide
whether $a_1^{b_1} a_2^{b_2} \cdots a_n^{b_n} \geq c_1^{d_1} c_2^{d_2} \cdots c_m^{d_m}$. Problem 1 is easily decidable
in polynomial time using a simple iterative
algorithm. Problem 2 is much harder. We observe that
the complexity of Problem 2 is intimately connected to
deep conjectures and results in number theory. In
particular, if a refined form of the ABC conjecture
formulated by Baker in 1998 holds, or if the older
Lang-Waldschmidt conjecture (formulated in 1978) on
linear forms in logarithms holds, then Problem 2 is
decidable in P-time (in the standard Turing model of
computation). Moreover, it follows from the best
available quantitative bounds on linear forms in
logarithms, namely, by Baker and W{\"u}stholz [1993] or
Matveev [2000], that if m and n are fixed universal
constants then Problem 2 is decidable in P-time
(without relying on any conjectures). This latter fact
was observed earlier by Shub [1993]. We describe one
application: P-time maximum probability parsing for
arbitrary stochastic context-free grammars (where
$\epsilon$-rules are allowed).",
acknowledgement = ack-nhfb,
articleno =    "9",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Cook:2014:CCC,
author =       "Stephen A. Cook and Yuval Filmus and Dai Tri Man
L{\^e}",
title =        "The complexity of the comparator circuit value
problem",
journal =      j-TOCT,
volume =       "6",
number =       "4",
pages =        "15:1--15:??",
month =        aug,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2635822",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Mon Aug 18 17:06:20 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "In 1990, Subramanian [1990] defined the complexity
class CC as the set of problems log-space reducible to
the comparator circuit value problem (CCV). He and Mayr
showed that NL $\subseteq$ CC $\subseteq$ P, and
proved that in addition to CCV several other problems
are complete for CC, including the stable marriage
problem, and finding the lexicographically first
maximal matching in a bipartite graph. Although the
class has not received much attention since then, we
are interested in CC because we conjecture that it is
incomparable with the parallel class NC which also
satisfies NL $\subseteq$ NC $\subseteq$ P, and note
that this conjecture implies that none of the
CC-complete problems has an efficient polylog time
parallel algorithm. We provide evidence for our
conjecture by giving oracle settings in which
relativized CC and relativized NC are incomparable. We
give several alternative definitions of CC, including
(among others) the class of problems computed by
uniform polynomial-size families of comparator circuits
supplied with copies of the input and its negation, the
class of problems AC0-reducible to CCV, and the class
of problems computed by uniform AC0 circuits with CCV
gates. We also give a machine model for CC, which
corresponds to its characterization as log-space
uniform polynomial-size families of comparator
circuits. These various characterizations show that CC
is a robust class. Our techniques also show that the
corresponding function class FCC is closed under
composition. The main technical tool we employ is
universal comparator circuits. Other results include a
simpler proof of NL $\subseteq$ CC, a more careful
analysis showing the lexicographically first maximal
matching problem and its variants are CC-complete under
AC0 many-one reductions, and an explanation of the
relation between the Gale--Shapley algorithm and
Subramanian's algorithm for stable marriage. This
article continues the previous work of Cook et al.
[2011], which focused on Cook--Nguyen style uniform
proof complexity, answering several open questions
raised in that article.",
acknowledgement = ack-nhfb,
articleno =    "15",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Fellows:2014:FCU,
author =       "Michael R. Fellows and Bart M. P. Jansen",
title =        "{FPT} is characterized by useful obstruction sets:
Connecting algorithms, kernels, and quasi-orders",
journal =      j-TOCT,
volume =       "6",
number =       "4",
pages =        "16:1--16:??",
month =        aug,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2635820",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Mon Aug 18 17:06:20 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "Many graph problems were first shown to be
fixed-parameter tractable using the results of
Robertson and Seymour on graph minors. We show that the
combination of finite, computable obstruction sets and
efficient order tests is not just one way of obtaining
strongly uniform FPT algorithms, but that all of FPT
may be captured in this way. Our new characterization
of FPT has a strong connection to the theory of
kernelization, as we prove that problems with
polynomial kernels can be characterized by obstruction
sets whose elements have polynomial size. Consequently
we investigate the interplay between the sizes of
problem kernels and the sizes of the elements of such
obstruction sets, obtaining several examples of how
results in one area yield new insights in the other. We
show how exponential-size minor-minimal obstructions
for pathwidth $k$ form the crucial ingredient in a
novel or-cross-composition for $k$-Pathwidth,
complementing the trivial and-composition that is known
for this problem. In the other direction, we show that
or-cross-compositions into a parameterized problem can
be used to rule out the existence of efficiently
generated quasi-orders on its instances that
characterize the no-instances by polynomial-size
obstructions.",
acknowledgement = ack-nhfb,
articleno =    "16",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Gobel:2014:CCH,
author =       "Andreas G{\"o}bel and Leslie Ann Goldberg and David
Richerby",
title =        "The complexity of counting homomorphisms to cactus
graphs modulo 2",
journal =      j-TOCT,
volume =       "6",
number =       "4",
pages =        "17:1--17:??",
month =        aug,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2635825",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Mon Aug 18 17:06:20 MDT 2014",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "A homomorphism from a graph $G$ to a graph $H$ is a
function from $V(G)$ to $V(H)$ that preserves edges.
Many combinatorial structures that arise in mathematics
and in computer science can be represented naturally as
graph homomorphisms and as weighted sums of graph
homomorphisms. We study the complexity
of counting homomorphisms modulo 2. The complexity of
modular counting was introduced by Papadimitriou and
Zachos and it has been pioneered by Valiant who
famously introduced a problem for which counting modulo
7 is easy but counting modulo 2 is intractable. Modular
counting provides a rich setting in which to study the
structure of homomorphism problems. In this case, the
structure of the graph $H$ has a big influence on the
complexity of the problem. Thus, our approach is
graph-theoretic. We give a complete solution for the
class of cactus graphs, which are connected graphs in
which every edge belongs to at most one cycle. Cactus
graphs arise in many applications such as the modelling
of wireless sensor networks and the comparison of
genomes. We show that, for some cactus graphs $H$,
counting homomorphisms to $H$ modulo 2 can be done in
polynomial time. For every other fixed cactus graph
$H$, the problem is complete in the complexity class $\oplus P$, which is a wide complexity class to which
every problem in the polynomial hierarchy can be
reduced (using randomised reductions). Determining
which $H$ lead to tractable problems can be done in
polynomial time. Our result builds upon the work of
Faben and Jerrum, who gave a dichotomy for the case in
which $H$ is a tree.",
acknowledgement = ack-nhfb,
articleno =    "17",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Watson:2014:ALB,
author =       "Thomas Watson",
title =        "Advice Lower Bounds for the Dense Model Theorem",
journal =      j-TOCT,
volume =       "7",
number =       "1",
pages =        "1:1--1:??",
month =        dec,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2676659",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Jan 13 17:53:00 MST 2015",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We prove a lower bound on the amount of nonuniform
advice needed by black-box reductions for the Dense
Model Theorem of Green, Tao, and Ziegler, and of
Reingold, Trevisan, Tulsiani, and Vadhan. The latter
theorem roughly says that for every distribution $D$
that is $\delta$-dense in a distribution that is $\epsilon '$-indistinguishable from uniform, there
exists a ``dense model'' for $D$, that is, a
distribution that is $\delta$-dense in the uniform
distribution and is $\epsilon$-indistinguishable from
$D$. This $\epsilon$-indistinguishability is with
respect to an arbitrary small class of functions $F$.
For the natural case where $\epsilon ' \geq \Omega (\epsilon \delta)$ and $\epsilon \geq \delta^{O(1)}$,
our lower bound implies that $\Omega (\sqrt {(1 / \epsilon) \log (1 / \delta)} \cdot \log | F |)$ advice
bits are necessary for a certain type of reduction that
establishes a stronger form of the Dense Model Theorem
(and which encompasses all known proofs of the Dense
Model Theorem in the literature). There is only a
polynomial gap between our lower bound and the best
upper bound for this case (due to Zhang), which is $O ((1 / \epsilon^2) \log (1 / \delta) \cdot \log | F |)$.
Our lower bound can be viewed as an analogue of list
size lower bounds for list-decoding of error-correcting
codes, but for ``dense model decoding'' instead.",
acknowledgement = ack-nhfb,
articleno =    "1",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Awasthi:2014:LLF,
author =       "Pranjal Awasthi and Madhav Jha and Marco Molinaro and
Sofya Raskhodnikova",
title =        "Limitations of Local Filters of {Lipschitz} and
Monotone Functions",
journal =      j-TOCT,
volume =       "7",
number =       "1",
pages =        "2:1--2:??",
month =        dec,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2692372.2692373",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Jan 13 17:53:00 MST 2015",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "We study local filters for two properties of functions
of the form $f : \{ 0, 1 \}^d \to R$: the Lipschitz
property and monotonicity. A local filter with additive
error $a$ is a randomized algorithm that is given
black-box access to a function $f$ and a query point
$x$ in the domain of $f$. It outputs a value $F(x)$
such that (i) the reconstructed function $F$
satisfies the property (in our case, is Lipschitz or
monotone) and (ii) if the input function $f$ satisfies
the property, then for every point $x$ in the domain
(with high constant probability) the reconstructed
value $F(x)$ differs from $f(x)$ by at most $a$.
Local filters were introduced by Saks and Seshadhri
[2010]. The relaxed definition we study is due to
Bhattacharyya et al. [2012], except that we further
relax it by allowing additive error. Local filters for
Lipschitz and monotone functions have applications to
areas such as data privacy. We show that every local
filter for Lipschitz or monotone functions runs in time
exponential in the dimension $d$, even when the filter is
allowed significant additive error. Prior lower bounds
(for local filters with no additive error, that is,
with $a = 0$) applied only to a more restrictive class
of filters, for example, nonadaptive filters. To prove
our lower bounds, we construct families of hard
functions and show that lookups of a local filter on
these functions are captured by a combinatorial object
that we call a $c$-connector. Then we present a lower
bound on the maximum outdegree of a $c$-connector and
show that it implies the desired bounds on the running
time of local filters. Our lower bounds, in particular,
imply the same bound on the running time for a class of
privacy mechanisms.",
acknowledgement = ack-nhfb,
articleno =    "2",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Greco:2014:CNC,
author =       "Gianluigi Greco and Enrico Malizia and Luigi Palopoli
and Francesco Scarcello",
title =        "The Complexity of the Nucleolus in Compact Games",
journal =      j-TOCT,
volume =       "7",
number =       "1",
pages =        "3:1--3:??",
month =        dec,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2692372.2692374",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Jan 13 17:53:00 MST 2015",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "The nucleolus is a well-known solution concept for
coalitional games to fairly distribute the total
available worth among the players. The nucleolus is
known to be NP -hard to compute over compact
coalitional games, that is, over games whose functions
specifying the worth associated with each coalition are
encoded in terms of polynomially computable functions
over combinatorial structures. In particular, hardness
results have been exhibited over minimum spanning tree
games, threshold games, and flow games. However, due to
its intricate definition involving reasoning over
exponentially many coalitions, a nontrivial upper bound
on its complexity was missing in the literature. This
article fills that gap and
precisely characterizes the complexity of the
nucleolus, by exhibiting an upper bound that holds on
any class of compact games, and by showing that this
bound is tight even on the (structurally simple) class
of graph games. The upper bound is established by
proposing a variant of the standard linear-programming
based algorithm for nucleolus computation and by
studying a framework for reasoning about succinctly
specified linear programs, which are contributions of
interest in their own. The hardness result is based on
an elaborate combinatorial reduction, which is
conceptually relevant for it provides a ``measure'' of
the computational cost to be paid for guaranteeing
voluntary participation to the distribution process. In
fact, the pre-nucleolus is known to be efficiently
computable over graph games, with this solution concept
being defined as the nucleolus but without guaranteeing
that each player is granted with it at least the worth
she can get alone, that is, without collaborating with
the other players. Finally, we identify
relevant tractable classes of coalitional games, based
on the notion of type of a player. Indeed, in most
applications where many players are involved, it is
often the case that such players do belong in fact to a
limited number of classes, which is known in advance
and may be exploited for computing the nucleolus in a
fast way.",
acknowledgement = ack-nhfb,
articleno =    "3",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}

@Article{Kratsch:2014:KLB,
author =       "Stefan Kratsch and Marcin Pilipczuk and Ashutosh Rai
and Venkatesh Raman",
title =        "Kernel Lower Bounds using Co-Nondeterminism: Finding
Induced Hereditary Subgraphs",
journal =      j-TOCT,
volume =       "7",
number =       "1",
pages =        "4:1--4:??",
month =        dec,
year =         "2014",
CODEN =        "????",
DOI =          "http://dx.doi.org/10.1145/2691321",
ISSN =         "1942-3454 (print), 1942-3462 (electronic)",
ISSN-L =       "1942-3454",
bibdate =      "Tue Jan 13 17:53:00 MST 2015",
bibsource =    "http://www.acm.org/pubs/contents/journals/toct/;
http://www.math.utah.edu/pub/tex/bib/toct.bib",
abstract =     "This work further explores the applications of
co-nondeterminism for showing kernelization lower
bounds. The only known example prior to this work
excludes polynomial kernelizations for the so-called
Ramsey problem of finding an independent set or a
clique of at least $k$ vertices in a given graph
[Kratsch 2012]. We study the more general problem of
finding induced subgraphs on $k$ vertices fulfilling
some hereditary property $\Pi$, called $\Pi$-Induced Subgraph. The problem is NP-hard for all
nontrivial choices of $\Pi$ by a classic result of
Lewis and Yannakakis [1980]. The parameterized
complexity of this problem was classified by Khot and
Raman [2002] depending on the choice of $\Pi$. The
interesting cases for kernelization are for $\Pi$
containing all independent sets and all cliques, since
the problem is trivially polynomial time solvable or
W[1]-hard otherwise. Our results are twofold. Regarding
$\Pi$-Induced Subgraph, we show that for a large
choice of natural graph properties $\Pi$, including
chordal, perfect, cluster, and cograph, there is no
polynomial kernel with respect to $k$. This is
established by two theorems, each one capturing
different (but not necessarily exclusive) sets of
properties: one using a co-nondeterministic variant of
OR-cross-composition and one by a polynomial parameter
transformation from Ramsey. Additionally, we show how
to use improvement versions of NP-hard problems as
source problems for lower bounds, without requiring
their NP-hardness. For example, for $\Pi$-Induced
Subgraph our compositions may assume existing solutions
of size $k - 1$. This follows from the more general
fact that source problems for OR-(cross-)compositions
need only be NP-hard under co-nondeterministic
reductions. We believe this to be useful for further
lower-bound proofs, for example, since improvement
versions simplify the construction of a disjunction
(OR) of instances required in compositions. This adds a
second way of using co-nondeterminism for lower
bounds.",
acknowledgement = ack-nhfb,
articleno =    "4",
fjournal =     "ACM Transactions on Computation Theory",
journal-URL =  "http://portal.acm.org/browse_dl.cfm?idx=J1190",
}
`