Brian Harry Marcus




Graduate Student Supervision

Doctoral Student Supervision (Jan 2008 - Nov 2019)
Combinatorial aspects of spatial mixing and new conditions for pressure representation (2016)

Over the last few decades, there has been growing interest in a measure-theoretic property of Gibbs distributions known as strong spatial mixing (SSM). SSM is connected with decay of correlations, uniqueness of equilibrium states, and approximation algorithms for counting problems, and it has been particularly useful for proving special representation formulas and the existence of efficient approximation algorithms for (topological) pressure. We investigate conditions for the existence of Gibbs distributions satisfying SSM, with special emphasis on hard-constrained models, and apply this to pressure representation and approximation techniques in Z^d lattice models.

Given a locally finite countable graph G and a finite graph H, we consider Hom(G,H), the set of graph homomorphisms from G to H, and we study Gibbs measures supported on Hom(G,H). We develop sufficient conditions, and separate necessary conditions, on Hom(G,H) for the existence of Gibbs specifications satisfying SSM (with exponential decay). In particular, we introduce a new combinatorial condition on the support of Gibbs distributions called topological strong spatial mixing (TSSM). We establish many useful properties of TSSM for studying SSM on systems with hard constraints, and we prove that TSSM combined with SSM is sufficient for the existence of an efficient approximation algorithm for pressure. We also show that TSSM is, in fact, necessary for SSM to hold at a high decay rate.

Finally, we prove a new pressure representation theorem for nearest-neighbour Gibbs interactions on Z^d shift spaces, and apply it to obtain efficient approximation algorithms for pressure in the Z² (ferromagnetic) Potts, (multi-type) Widom-Rowlinson, and hard-core lattice gas models. For Potts, the results apply at every inverse temperature except the critical one. For Widom-Rowlinson and the hard-core lattice gas, they apply to certain subsets of both the subcritical and supercritical regions. The main novelty of this work is in the latter, where SSM cannot hold.
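The space Hom(G,H) at the heart of this abstract can be made concrete for small finite graphs. A minimal brute-force sketch in Python (the particular graphs, names, and dictionary representation here are illustrative choices, not from the thesis):

```python
from itertools import product

def homomorphisms(g_vertices, g_edges, h_adj):
    """Enumerate all graph homomorphisms from G to H by brute force.

    g_edges: list of (u, v) pairs in G; h_adj: set of allowed (a, b)
    pairs in H (symmetric, loops allowed). A map f is a homomorphism
    iff every edge of G is sent to an edge of H.
    """
    h_vertices = sorted({v for edge in h_adj for v in edge})
    homs = []
    for assignment in product(h_vertices, repeat=len(g_vertices)):
        f = dict(zip(g_vertices, assignment))
        if all((f[u], f[v]) in h_adj for (u, v) in g_edges):
            homs.append(f)
    return homs

# G = path on 4 vertices; H = the hard-core constraint graph:
# vertices {0, 1}, edges 0-0 and 0-1, so adjacent 1's are forbidden.
path_edges = [(0, 1), (1, 2), (2, 3)]
hard_core = {(0, 0), (0, 1), (1, 0)}
homs = homomorphisms([0, 1, 2, 3], path_edges, hard_core)
print(len(homs))  # → 8, the binary strings of length 4 with no two adjacent 1's
```

For the hard-core graph these counts grow as Fibonacci numbers in the length of the path, which is the simplest instance of the hard constraints the thesis studies.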


Markov random fields and measures with nearest neighbour Gibbs potential (2015)

The well-known Hammersley-Clifford Theorem states (under certain conditions) that any Markov random field is a Gibbs state for a nearest neighbour interaction. Following Petersen and Schmidt we utilise the formalism of cocycles for the homoclinic relation and introduce "Markov cocycles", reparametrisations of Markov specifications. We exploit this formalism to deduce the conclusion of the Hammersley-Clifford Theorem for a family of Markov random fields which are outside the theorem's purview (including Markov random fields whose support is the d-dimensional "3-colored chessboard"). At the other extreme, we construct a family of shift-invariant Markov random fields which are not given by any finite-range shift-invariant interaction.

The techniques that we use for this problem are expanded upon further to obtain the following results. Given a "four-cycle free" finite undirected graph H without self-loops, consider the corresponding 'vertex' shift, Hom(Z^d, H), denoted by X(H). We prove that X(H) has the pivot property, meaning that for all distinct configurations x, y ∈ X(H) which differ at only finitely many sites there is a sequence of configurations x = x¹, x², ..., xⁿ = y in X(H) in which successive configurations xⁱ, xⁱ⁺¹ differ at exactly one site. Further, if H is connected we prove that X(H) is entropy minimal, meaning that every shift space strictly contained in X(H) has strictly smaller entropy. The proofs of these seemingly disparate statements are related by the use of 'lifts' of the configurations in X(H) to their universal cover and the introduction of 'height functions' in this context.

Further, we generalise the Hammersley-Clifford Theorem under the added condition that the underlying graph is bipartite. Taking inspiration from Brightwell and Winkler, we introduce a notion of folding for configuration spaces called strong config-folding, and prove that if all Markov random fields supported on X are Gibbs with some nearest neighbour interaction, then so are the Markov random fields supported on the "strong config-folds" and "strong config-unfolds" of X.
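The pivot property described above can be observed on a finite window: pin a configuration outside the window and check that all compatible fillings are connected by single-site changes. A small Python sketch for d = 1 (the window size, pinning, and the choice of H — a path on 4 vertices, which is four-cycle free as the theorem requires — are illustrative assumptions, not the thesis's proof):

```python
from itertools import product
from collections import deque

# H = path graph on vertices {0, 1, 2, 3}; edges join consecutive integers.
H_EDGES = {(a, b) for a in range(4) for b in range(4) if abs(a - b) == 1}

def pinned_walks(n, boundary):
    """All walks w[0..n-1] on H (i.e. homomorphisms from a path in Z to H)
    with both endpoint values pinned to `boundary`."""
    return [w for w in product(range(4), repeat=n)
            if w[0] == boundary and w[-1] == boundary
            and all((w[i], w[i + 1]) in H_EDGES for i in range(n - 1))]

def pivot_connected(configs):
    """BFS in the graph whose edges join configurations differing at
    exactly one site; returns True iff that graph is connected."""
    cfgs, start = set(configs), configs[0]
    seen, queue = {start}, deque([start])
    while queue:
        w = queue.popleft()
        for c in cfgs - seen:
            if sum(a != b for a, b in zip(w, c)) == 1:
                seen.add(c)
                queue.append(c)
    return len(seen) == len(cfgs)

walks = pinned_walks(7, boundary=0)
print(len(walks), pivot_connected(walks))  # → 5 True
```

Pinning the boundary mimics two configurations of X(H) that agree outside a finite set, which is exactly the situation the pivot property addresses.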


Structure and randomness in dynamical systems: different forms of equicontinuity and sensitivity (2015)

We study topological and measure-theoretic forms of mean equicontinuity and mean sensitivity for dynamical systems. With this we characterize well-known notions like systems with discrete spectrum, almost periodic functions, and subshifts with regular extensions. We also study the limit behaviour of µ-equicontinuous cellular automata. In this thesis we prove a conjecture from [55] (see Corollary 2.3.18); this was independently solved by Li-Tu-Ye in [45]. In Chapter 3 we answer questions from [8].


Capacity of multidimensional constrained channels (2010)

This work considers channels for which the input is constrained to be from a given set of D-dimensional arrays over a finite alphabet. Such a set is called a constraint. An encoder for such a channel transforms arbitrary arrays over the alphabet into constrained arrays in a decipherable manner. The rate of the encoder is the ratio of the size of its input to the size of its output. The capacity of the channel, or constraint, is the highest achievable rate of any encoder for the channel. We compute the exact capacity of two families of multidimensional constraints. We also generalize a known method for obtaining lower bounds on the capacity for a certain class of 2-dimensional constraints, and improve the best known bounds for a few constraints of this class.

Given a binary D-dimensional constraint, a D-dimensional array with entries in {0, 1, *} is called "valid", for the purpose of this abstract, if every "filling" of the '*'s in the array with '0's and '1's, independently, results in an array that belongs to the constraint. The density of '*'s in the array is called the insertion rate. The largest insertion rate achievable in arbitrarily large arrays is called the maximum insertion rate. An unconstrained encoder for a given insertion rate transforms arbitrary binary arrays into valid arrays having the specified insertion rate. The tradeoff function essentially specifies, for a given insertion rate, the maximum rate of an unconstrained encoder for that insertion rate. We determine the tradeoff function for a certain family of 1-dimensional constraints.

Given a 1-dimensional constraint, one can consider the D-dimensional constraint formed by collecting all the D-dimensional arrays for which the original 1-dimensional constraint is satisfied on every "row" in every "direction". The sequence of capacities of these D-dimensional generalizations has a limit as D approaches infinity, sometimes called the infinite-dimensional capacity. We partially answer a question of [37] by proving that, for a large class of 1-dimensional constraints with maximum insertion rate 0, the infinite-dimensional capacity equals 0 as well.
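For D = 1 the capacity of a constraint presented by a finite transfer (adjacency) matrix is the base-2 logarithm of its Perron eigenvalue — a standard fact, illustrated below for one classical example (the specific constraint is chosen for illustration and is not one of the families treated in the thesis):

```python
import numpy as np

# The binary "no two adjacent 1's" constraint, i.e. the (1, infinity)
# run-length-limited constraint. States are the last symbol written:
#   after a 0 we may write 0 or 1; after a 1 we must write 0.
A = np.array([[1, 1],
              [1, 0]])

# Capacity = log2 of the largest (Perron) eigenvalue of A,
# which here is the golden ratio (1 + sqrt(5)) / 2.
capacity = np.log2(max(abs(np.linalg.eigvals(A))))
print(round(capacity, 4))  # → 0.6942
```

In dimension D ≥ 2 no such closed form is known in general, which is what makes the exact multidimensional capacities computed in this thesis notable.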


Master's Student Supervision (2010 - 2018)
Markov Random Fields and Measures with Nearest Neighbour Gibbs Potential (2011)

This thesis will discuss the relationship between stationary Markov random fields and probability measures with a nearest neighbour Gibbs potential. While the relationship has been well explored when the measures are fully supported, we shall discuss what happens when we weaken this assumption.



Membership Status

Member of G+PS


