
  • Elliptic hypergeometric integrals

    Eric M. Rains

    Department of Mathematics

    California Institute of Technology

    MPIM Oberseminar, Bonn, 24/7/2008

  • Hypergeometric integrals

    Gaussian integral:

    \int_{-\infty}^{\infty} e^{-x^2/2}\,dx = \sqrt{2\pi}

    (Hermite polynomials)

    Gamma integral:

    \int_0^{\infty} x^{\alpha-1} e^{-x}\,dx = \Gamma(\alpha)

    (Laguerre polynomials)

    Beta integral:

    \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}\,dx = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}

    (Jacobi polynomials)

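    A quick numerical sanity check of the Gaussian, gamma and beta evaluations above
    (a minimal sketch using scipy; the parameter values below are arbitrary test choices):

    # Check the Gaussian, gamma and beta integrals numerically.
    import numpy as np
    from scipy import integrate, special

    alpha, beta = 2.3, 1.7

    gauss, _ = integrate.quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)
    print(gauss, np.sqrt(2 * np.pi))

    gamma_lhs, _ = integrate.quad(lambda x: x**(alpha - 1) * np.exp(-x), 0, np.inf)
    print(gamma_lhs, special.gamma(alpha))

    beta_lhs, _ = integrate.quad(lambda x: x**(alpha - 1) * (1 - x)**(beta - 1), 0, 1)
    print(beta_lhs, special.gamma(alpha) * special.gamma(beta) / special.gamma(alpha + beta))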

  • Why hypergeometric?

    \int_0^1 x^{\alpha-1}(1-x)^{\beta-1}(1-tx)^{-\gamma}\,dx
    = \sum_{0\le k} \frac{\Gamma(\beta)\,\Gamma(\alpha+k)\,\Gamma(\gamma+k)}{\Gamma(\gamma)\,\Gamma(\alpha+\beta+k)\,\Gamma(1+k)}\,t^k
    = \frac{\Gamma(\alpha)\Gamma(\beta)}{\Gamma(\alpha+\beta)}\,{}_2F_1(\gamma,\alpha;\alpha+\beta;t)

    Note transformations:

    {}_2F_1(a,b;c;t) = {}_2F_1(b,a;c;t)
    = (1-t)^{c-a-b}\,{}_2F_1(c-b,c-a;c;t)
    = (1-t)^{-b}\,{}_2F_1(c-a,b;c;\tfrac{t}{t-1})

    and evaluation

    {}_2F_1(a,b;c;1) = \frac{\Gamma(c)\,\Gamma(c-a-b)}{\Gamma(c-a)\,\Gamma(c-b)}

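    The Euler integral representation and the Gauss evaluation can be checked numerically
    in the same way (a sketch; hyp2f1 is scipy's Gauss hypergeometric function, and the
    parameter values are arbitrary):

    # Check the Euler integral against 2F1, and the Gauss evaluation at t = 1.
    import numpy as np
    from scipy import integrate, special

    a, b, g, t = 1.4, 2.2, 0.8, 0.35   # alpha, beta, gamma, t

    lhs, _ = integrate.quad(lambda x: x**(a - 1) * (1 - x)**(b - 1) * (1 - t * x)**(-g), 0, 1)
    rhs = special.gamma(a) * special.gamma(b) / special.gamma(a + b) * special.hyp2f1(g, a, a + b, t)
    print(lhs, rhs)

    # Gauss evaluation, valid for Re(C - A - B) > 0
    A, B, C = 0.3, 0.5, 2.1
    print(special.hyp2f1(A, B, C, 1.0),
          special.gamma(C) * special.gamma(C - A - B) / (special.gamma(C - A) * special.gamma(C - B)))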

  • Multivariate analogues

    (Selberg) Let \alpha, \beta, \gamma be complex numbers with
    positive real parts. Then

    \frac{1}{n!}\int_{[0,1]^n}
      \prod_{1\le i<j\le n}|x_i-x_j|^{2\gamma}
      \prod_{1\le i\le n}x_i^{\alpha-1}(1-x_i)^{\beta-1}\,dx_i
    = \prod_{0\le i<n}
      \frac{\Gamma(\alpha+i\gamma)\,\Gamma(\beta+i\gamma)\,\Gamma((i+1)\gamma)}
           {\Gamma(\alpha+\beta+(n+i-1)\gamma)\,\Gamma(\gamma)}

    (Morris) Similarly, with comparable conditions on a, b, \gamma,

    \frac{1}{n!}\int_{T^n}
      \prod_{1\le i<j\le n}|z_i-z_j|^{2\gamma}
      \prod_{1\le i\le n}(1-z_i)^{a}(1-1/z_i)^{b}\,\frac{dz_i}{2\pi i z_i}
    = \prod_{0\le i<n}
      \frac{\Gamma(a+b+1+i\gamma)\,\Gamma((i+1)\gamma)}
           {\Gamma(a+1+i\gamma)\,\Gamma(b+1+i\gamma)\,\Gamma(\gamma)}
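    The n = 2 case of the Selberg evaluation written above can be checked directly
    (a sketch; the parameters are chosen so the integrand is smooth on [0,1]^2):

    # n = 2 Selberg integral versus the product of Gamma factors.
    import math
    import numpy as np
    from scipy import integrate, special

    alpha, beta, gam, n = 2.0, 1.5, 1.0, 2

    def integrand(y, x):
        return abs(x - y)**(2 * gam) * (x * y)**(alpha - 1) * ((1 - x) * (1 - y))**(beta - 1)

    lhs, _ = integrate.dblquad(integrand, 0, 1, lambda x: 0, lambda x: 1)
    lhs /= math.factorial(n)

    rhs = np.prod([special.gamma(alpha + i * gam) * special.gamma(beta + i * gam)
                   * special.gamma((i + 1) * gam)
                   / (special.gamma(alpha + beta + (n + i - 1) * gam) * special.gamma(gam))
                   for i in range(n)])
    print(lhs, rhs)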

  • (Dirichlet) Let \alpha_0, \ldots, \alpha_n be complex numbers
    with positive real parts. Then

    \int_{\sum_i x_i = 1}\prod_{0\le i\le n}x_i^{\alpha_i-1}\,dx_i
    = \frac{\prod_i\Gamma(\alpha_i)}{\Gamma(\sum_i\alpha_i)}

    When the \alpha_i are real, this is a probability distribution
    (multivariate beta).

    (Anderson; earlier by Varchenko, much earlier by Dixon): Let
    \alpha_0, \ldots, \alpha_n be complex numbers with positive real
    parts, and let a_0 > \cdots > a_n. Then

    \int_{x_i\in[a_{i+1},a_i]}
      \prod_{1\le i<j\le n}(x_i-x_j)
      \prod_{1\le i\le n}\prod_{0\le j\le n}|x_i-a_j|^{\alpha_j-1}\,dx_i
    = \frac{\prod_i\Gamma(\alpha_i)}{\Gamma(\sum_i\alpha_i)}
      \prod_{0\le i<j\le n}(a_i-a_j)^{\alpha_i+\alpha_j-1}
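    The Dirichlet integral is easy to check for n = 2, writing x_0 = 1 - x_1 - x_2 and
    integrating over the triangle x_1 + x_2 <= 1 (a sketch; the exponents are arbitrary,
    chosen > 1 to keep the integrand bounded):

    # n = 2 Dirichlet integral over the simplex.
    from scipy import integrate, special

    a0, a1, a2 = 1.3, 2.1, 1.7

    def integrand(x2, x1):
        x0 = 1.0 - x1 - x2
        return x0**(a0 - 1) * x1**(a1 - 1) * x2**(a2 - 1)

    lhs, _ = integrate.dblquad(integrand, 0, 1, lambda x1: 0, lambda x1: 1 - x1)
    rhs = special.gamma(a0) * special.gamma(a1) * special.gamma(a2) / special.gamma(a0 + a1 + a2)
    print(lhs, rhs)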

  • Proof of Dirichlet integral: multiply by

    \int_0^{\infty} u^{(\sum_i\alpha_i)-1}\exp(-u)\,du = \Gamma\Bigl(\sum_i\alpha_i\Bigr)

    and change variables x_i = y_i/u. Proof of Dixon-Anderson
    integral reduces to Dirichlet integral by another change of
    variables; Selberg integral follows (per Anderson) by iterating
    the Dixon-Anderson integral.
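    The change of variables x_i = y_i/u amounts to the statement that normalizing
    independent Gamma(\alpha_i) variables by their sum produces the Dirichlet density.
    A small Monte Carlo sketch of that fact (the moment formula is the standard
    Dirichlet moment; parameter values are arbitrary):

    # Normalized independent Gamma variables are Dirichlet distributed.
    import numpy as np

    rng = np.random.default_rng(0)
    alpha = np.array([1.3, 2.1, 1.7])
    y = rng.gamma(shape=alpha, size=(200000, 3))
    x = y / y.sum(axis=1, keepdims=True)

    # compare a mixed moment E[x0^2 x1] with the exact Dirichlet moment
    emp = np.mean(x[:, 0]**2 * x[:, 1])
    a, s = alpha, alpha.sum()
    exact = a[0] * (a[0] + 1) * a[1] / (s * (s + 1) * (s + 2))
    print(emp, exact)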

  • Probabilistic interpretation of Dixon-Anderson

    Integer \alpha_i: take a Hermitian matrix with characteristic
    polynomial \prod_i(\lambda-a_i)^{\alpha_i} and restrict to a
    random hyperplane; Dixon-Anderson is the distribution of the new
    eigenvalues.

    Similarly for a real symmetric matrix with characteristic
    polynomial \prod_i(\lambda-a_i)^{2\alpha_i}.

    Due to Baryshnikov in the complex case with \alpha_i \equiv 1.
    The general complex case follows either by generalizing the
    argument or by taking the limit as eigenvalues coalesce.

    Note that the general case of Dixon-Anderson follows by
    (nontrivial) analytic continuation from the integer case. And the
    \alpha_i \equiv 1 case is easy, so this gives a (much more
    complicated) alternate proof.

    There are similar random matrix interpretations for exponential
    and Gaussian versions of Dixon-Anderson.

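    A small numerical illustration of the \alpha_i \equiv 1 case: compress a Hermitian
    matrix with prescribed eigenvalues to a uniformly random hyperplane and observe that
    the new eigenvalues interlace the old ones; the expected trace gives a quick
    consistency check (all numerical choices below are arbitrary):

    # Restriction of a Hermitian matrix to a random hyperplane (alpha_i = 1 case).
    import numpy as np

    rng = np.random.default_rng(1)
    a = np.array([4.0, 2.5, 1.0, -0.5])     # prescribed eigenvalues, decreasing
    n = len(a)

    def compressed_eigs():
        g = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
        qmat, r = np.linalg.qr(g)
        qmat = qmat * (np.diag(r) / np.abs(np.diag(r)))    # phase fix -> Haar unitary
        A = qmat @ np.diag(a) @ qmat.conj().T
        return np.sort(np.linalg.eigvalsh(A[:n - 1, :n - 1]))[::-1]

    x = np.array([compressed_eigs() for _ in range(20000)])

    print("interlacing:", np.all((x <= a[:-1] + 1e-9) & (x >= a[1:] - 1e-9)))
    print(x.sum(axis=1).mean(), (n - 1) / n * a.sum())     # E[trace] = (n-1)/n * sum(a)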

  • Increasing subsequences

    For a permutation \sigma\in S_n, an increasing subsequence is a
    subset S\subset\{1,2,\ldots,n\} with \sigma increasing on S. Let
    \ell(\sigma) be the maximum size of an increasing subsequence of
    \sigma.

    Theorem (Gessel, Rains) The number of \sigma\in S_n with
    \ell(\sigma)\le k is

    E_{U\in U(k)}|\mathrm{Tr}(U)|^{2n}.

    If we choose n randomly (Poisson: probability
    e^{-\theta^2}\theta^{2n}/n!), then choose \sigma\in S_n uniformly
    at random, the probability that \ell(\sigma)\le k is

    e^{-\theta^2}\,E_{U\in U(k)}\exp(2\theta\,\Re(\mathrm{Tr}(U))).

    This looks like the Morris integral (a=b=0, \gamma=1), with an
    extra factor

    \exp\bigl(\theta\sum_i(z_i+1/z_i)\bigr).

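    The finite-n statement can be tested directly for small n and k: count permutations
    by brute force and estimate the unitary-group moment by Monte Carlo (a sketch; Haar
    unitaries are sampled by phase-corrected QR of complex Gaussian matrices):

    # #{sigma in S_n : longest increasing subsequence <= k}  vs  E |Tr U|^(2n) over U(k).
    import numpy as np
    from bisect import bisect_left
    from itertools import permutations

    def lis_length(perm):
        piles = []                       # patience sorting
        for v in perm:
            i = bisect_left(piles, v)
            if i == len(piles):
                piles.append(v)
            else:
                piles[i] = v
        return len(piles)

    n, k = 5, 3
    exact = sum(1 for p in permutations(range(n)) if lis_length(p) <= k)

    rng = np.random.default_rng(2)
    def haar_unitary(m):
        g = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
        q, r = np.linalg.qr(g)
        return q * (np.diag(r) / np.abs(np.diag(r)))

    moments = [abs(np.trace(haar_unitary(k)))**(2 * n) for _ in range(200000)]
    print(exact, np.mean(moments))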

  • Similar results apply to increasing (or decreasing) subsequences
    of fixed-point-free involutions. Many other combinatorial models
    give rise to Selberg integrals or discrete analogues
    (first-passage percolation, polynuclear growth, totally
    asymmetric exclusion, domino/lozenge tilings, plane partitions).
    Typically \gamma = 1, although symmetric models may have
    \gamma = 1/2 or \gamma = 2.

    Many of these come from the Jack polynomial identity

    P^{(n)}_\lambda(1,1,\ldots,1;\gamma)
    = \prod_{1\le i<j\le n}
      \frac{\Gamma(\lambda_i-\lambda_j+(j-i+1)\gamma)\,\Gamma((j-i)\gamma)}
           {\Gamma(\lambda_i-\lambda_j+(j-i)\gamma)\,\Gamma((j-i+1)\gamma)}

  • q-analogues

    Define

    \Gamma_q(x) := \prod_{0\le i}(1-q^i x)^{-1} =: 1/(x;q)_\infty

    \Gamma_q(x,y,\ldots,z) := \Gamma_q(x)\,\Gamma_q(y)\cdots\Gamma_q(z)

    (Will use a similar convention for other functions.)

    Note

    \Gamma_q(qx) = (1-x)\,\Gamma_q(x)

    \lim_{q\to 1}
      \frac{(1-q)^{-a}\,\Gamma_q(q^a)}{(1-q)^{-b}\,\Gamma_q(q^b)}
    = \frac{\Gamma(a)}{\Gamma(b)}

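    A truncated-product implementation of \Gamma_q(x) makes the functional equation and
    the q \to 1 limit easy to test (a sketch; truncation lengths and parameter values
    are arbitrary):

    # Gamma_q(x) = 1/(x;q)_inf via a truncated product, plus two checks.
    import numpy as np
    from scipy import special

    def gamma_q(x, q, terms=400):
        return 1.0 / np.prod(1.0 - q**np.arange(terms) * x)

    q, x = 0.9, 0.3
    print(gamma_q(q * x, q), (1 - x) * gamma_q(x, q))       # Gamma_q(qx) = (1-x) Gamma_q(x)

    a, b = 1.7, 2.4
    for q in (0.9, 0.99, 0.999):
        ratio = ((1 - q)**(-a) * gamma_q(q**a, q, terms=20000)
                 / ((1 - q)**(-b) * gamma_q(q**b, q, terms=20000)))
        print(q, ratio, special.gamma(a) / special.gamma(b))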

  • Macdonald-Morris integral:

    \frac{1}{n!}\int_{T^n}
      \prod_{1\le i<j\le n}
        \frac{\Gamma_q(t z_i/z_j,\,t z_j/z_i)}{\Gamma_q(z_i/z_j,\,z_j/z_i)}
      \prod_{1\le i\le n}
        \frac{\Gamma_q(a z_i,\,bq/z_i)}{\Gamma_q(z_i,\,q/z_i)}
      \prod_{1\le i\le n}\frac{dz_i}{2\pi i z_i}

    evaluates in closed form as a product of \Gamma_q factors (the
    q-analogue of the Morris evaluation above), with t playing the
    role of q^\gamma.

  • Macdonald polynomials

    The Macdonald polynomials P^{(n)}_\lambda(x_1,\ldots,x_n;q,t) are
    symmetric (invariant under permutations of x_1,\ldots,x_n), have
    leading monomial \prod_{1\le i\le n}x_i^{\lambda_i}, and are
    orthogonal w.r.t. Macdonald-Morris for a = b = 1.

    Macdonald conjectures:

    1. Explicit formula for P^{(n)}_\lambda(1,t,\ldots,t^{n-1};q,t).

    2. Explicit formula for the inner product.

    3. Symmetry:

    \frac{P^{(n)}_\lambda(\ldots,q^{\mu_i}t^{n-i},\ldots;q,t)}
         {P^{(n)}_\lambda(\ldots,t^{n-i},\ldots;q,t)}
    \text{ is symmetric in } \lambda, \mu.

    These generalize to arbitrary (finite) root systems (Cherednik).


  • (Rahman)

    \frac{1}{2\,\Gamma_q(q)}\int_{z\in S^1}
      \frac{\prod_{0\le r\le 4}\Gamma_q(t_r z^{\pm 1})}
           {\Gamma_q(T z^{\pm 1},\,z^{\pm 2})}
      \frac{dz}{2\pi i z}
    = \frac{\prod_{0\le r<s\le 4}\Gamma_q(t_r t_s)}
           {\prod_{0\le r\le 4}\Gamma_q(T/t_r)},

    where T = t_0 t_1 t_2 t_3 t_4 and |t_r| < 1.
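    The evaluation as written above can be verified numerically: truncate the q-products
    and apply the trapezoid rule on |z| = 1 (a sketch; the parameters t_r are arbitrary
    values of modulus less than 1):

    # Numerical check of the Rahman integral.
    import numpy as np

    def qpoch(x, q, terms=200):
        i = np.arange(terms).reshape((-1,) + (1,) * np.ndim(x))
        return np.prod(1.0 - q**i * x, axis=0)

    def gamma_q(x, q):
        return 1.0 / qpoch(x, q)

    q = 0.4
    t = np.array([0.5, 0.45, 0.35, 0.3, 0.25])
    T = t.prod()

    z = np.exp(2j * np.pi * np.arange(2000) / 2000)
    num = np.ones_like(z)
    for tr in t:
        num = num * gamma_q(tr * z, q) * gamma_q(tr / z, q)
    den = gamma_q(T * z, q) * gamma_q(T / z, q) * gamma_q(z**2, q) * gamma_q(z**-2, q)

    # dz/(2 pi i z) on |z| = 1 is dtheta/(2 pi); prefactor 1/(2 Gamma_q(q)) = (q;q)_inf / 2
    lhs = qpoch(q, q) / 2 * (num / den).mean()

    rhs = np.prod([gamma_q(t[r] * t[s], q) for r in range(5) for s in range(r + 1, 5)])
    rhs /= np.prod([gamma_q(T / t[r], q) for r in range(5)])
    print(lhs.real, float(rhs), abs(lhs.imag))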

  • Gustafson's proof

    First, the identity:

    \frac{1}{\Gamma_q(q)^n\,2^n\,n!}\int_{z\in T^n}
      \prod_{1\le i<j\le n}\frac{1}{\Gamma_q(z_i^{\pm 1}z_j^{\pm 1})}
      \prod_{1\le i\le n}
        \frac{\prod_{0\le r<2n+2}\Gamma_q(t_r z_i^{\pm 1})}
             {\Gamma_q(z_i^{\pm 2})}
      \prod_{1\le i\le n}\frac{dz_i}{2\pi i z_i}
    = \frac{\prod_{0\le r<s<2n+2}\Gamma_q(t_r t_s)}
           {\Gamma_q(t_0 t_1\cdots t_{2n+1})},

    a q-analogue of the Dixon-Anderson integral.

  • Elliptic analogues

    Ruijsenaars' elliptic Gamma function:

    \Gamma_{p,q}(x) = \prod_{0\le j,k}
      \frac{1-p^{j+1}q^{k+1}/x}{1-p^j q^k x}

    Why elliptic? Consider

    \theta_p(x) := \frac{\Gamma_{p,q}(qx)}{\Gamma_{p,q}(x)}
    = \prod_{0\le k}(1-p^k x)(1-p^{k+1}/x);

    observe that

    \theta_p(x) = -x\,\theta_p(px),

    so \theta_p is a theta function on the elliptic curve
    \mathbb{C}^\times/p^{\mathbb{Z}}.

    Also note

    \Gamma_{p,q}(px) = \theta_q(x)\,\Gamma_{p,q}(x)

    \Gamma_{p,q}(x) = \Gamma_{p,q}(pq/x)^{-1}

    \Gamma_{0,q}(x) = \Gamma_q(x).

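    The definitions and identities above are straightforward to test with truncated
    products (a sketch; truncation lengths and the test point x are arbitrary):

    # Elliptic Gamma function via a truncated double product, plus the identities above.
    import numpy as np

    def gamma_pq(x, p, q, N=60):
        j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
        return np.prod((1 - p**(j + 1) * q**(k + 1) / x) / (1 - p**j * q**k * x))

    def theta(x, p, N=200):
        k = np.arange(N)
        return np.prod((1 - p**k * x) * (1 - p**(k + 1) / x))

    p, q, x = 0.15, 0.3, 0.45 + 0.2j

    print(gamma_pq(q * x, p, q) / gamma_pq(x, p, q), theta(x, p))   # Gamma(qx)/Gamma(x) = theta_p(x)
    print(theta(x, p), -x * theta(p * x, p))                        # theta_p(x) = -x theta_p(px)
    print(gamma_pq(p * x, p, q), theta(x, q) * gamma_pq(x, p, q))   # Gamma(px) = theta_q(x) Gamma(x)
    print(gamma_pq(x, p, q) * gamma_pq(p * q / x, p, q))            # reflection: should be 1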

  • Spiridonov (elliptic beta integral):

    \frac{1}{2\,\Gamma_p(p)\,\Gamma_q(q)}\int_{z\in S^1}
      \frac{\prod_{0\le r\le 5}\Gamma_{p,q}(t_r z^{\pm 1})}
           {\Gamma_{p,q}(z^{\pm 2})}
      \frac{dz}{2\pi i z}
    = \prod_{0\le r<s\le 5}\Gamma_{p,q}(t_r t_s),

    where |t_r| < 1 and t_0 t_1\cdots t_5 = pq.
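    The elliptic beta integral as written above can also be checked numerically, since
    the integrand is analytic in an annulus around |z| = 1 (a sketch; the t_r are
    arbitrary subject to |t_r| < 1 and the balancing condition):

    # Numerical check of the elliptic beta integral.
    import numpy as np

    def gamma_pq(x, p, q, N=40):
        j = np.arange(N).reshape(-1, 1, 1)
        k = np.arange(N).reshape(1, -1, 1)
        xx = np.atleast_1d(x)
        val = np.prod((1 - p**(j + 1) * q**(k + 1) / xx) / (1 - p**j * q**k * xx), axis=(0, 1))
        return val if np.ndim(x) else val[0]

    def qpoch(a, b, terms=200):
        return np.prod(1 - b**np.arange(terms) * a)

    p, q = 0.05, 0.07
    t = np.array([0.5, 0.45, 0.4, 0.35, 0.3], dtype=complex)
    t = np.append(t, p * q / t.prod())                  # enforce t0...t5 = pq

    z = np.exp(2j * np.pi * np.arange(400) / 400)
    integrand = np.ones_like(z)
    for tr in t:
        integrand = integrand * gamma_pq(tr * z, p, q) * gamma_pq(tr / z, p, q)
    integrand = integrand / (gamma_pq(z**2, p, q) * gamma_pq(z**-2, p, q))

    lhs = 0.5 * qpoch(p, p) * qpoch(q, q) * integrand.mean()    # prefactor 1/(2 Gamma_p(p) Gamma_q(q))
    rhs = np.prod([gamma_pq(t[r] * t[s], p, q) for r in range(6) for s in range(r + 1, 6)])
    print(lhs.real, rhs.real, abs(lhs.imag))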

  • Also, following Gustafson (elliptic Dixon-Anderson integral):

    \frac{1}{\Gamma_p(p)^n\,\Gamma_q(q)^n\,2^n\,n!}\int_{z\in T^n}
      \prod_{1\le i<j\le n}\frac{1}{\Gamma_{p,q}(z_i^{\pm 1}z_j^{\pm 1})}
      \prod_{1\le i\le n}
        \frac{\prod_{0\le r<2n+4}\Gamma_{p,q}(t_r z_i^{\pm 1})}
             {\Gamma_{p,q}(z_i^{\pm 2})}
      \prod_{1\le i\le n}\frac{dz_i}{2\pi i z_i}
    = \prod_{0\le r<s<2n+4}\Gamma_{p,q}(t_r t_s),

    where |t_r| < 1 and t_0 t_1\cdots t_{2n+3} = pq.

  • What about orthogonal polynomials? Already a no-go theorem at the
    univariate level (Askey-Wilson polynomials are the most general
    hypergeometric orthogonal polynomials). But something slightly
    weaker works: biorthogonal elliptic functions
    (Spiridonov/Zhedanov at the elliptic level).

    Can we make this work at the multivariate level?


  • First key idea: the double integral proof of the Type II integral
    should give adjoint integral operators.

    Dixon-Anderson case (\alpha_i \equiv 1):

    E_v\,s_\lambda\bigl((1-vv^*)A(1-vv^*)\bigr) \propto s_\lambda(A),

    so the Dixon-Anderson integral takes (n-1)-variable Schur
    functions to n-variable Schur functions. (Similar to Okounkov's
    integral representation for interpolation polynomials.)

    In general, the elliptic Dixon-Anderson integral takes
    (n-1)-variable biorthogonal functions to n-variable biorthogonal
    functions. (Preservation of biorthogonality is easy; the hard
    part is showing that the image is in the correct space: use a
    degenerate case of the transformation!)

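    The displayed Schur identity can be probed by Monte Carlo: the average of s_\lambda
    over compressions of A to random hyperplanes should be proportional to s_\lambda(A),
    with a constant not depending on A. The sketch below tests this by comparing the
    ratio of averages for two matrices with the ratio of Schur functions (bialternant
    formula; all numerical choices are arbitrary):

    # E_v s_lambda((1-vv*)A(1-vv*)) is proportional to s_lambda(A): test the proportionality.
    import numpy as np

    rng = np.random.default_rng(3)
    lam = [2, 1, 0, 0]                       # partition, padded to n parts

    def schur(eigs, lam):
        x = np.asarray(eigs, dtype=complex)
        n = len(x)
        num = np.linalg.det(x[:, None] ** np.array([lam[j] + n - 1 - j for j in range(n)])[None, :])
        den = np.linalg.det(x[:, None] ** np.arange(n - 1, -1, -1)[None, :])
        return num / den

    def avg_compressed(eigvals, trials=20000):
        n = len(eigvals)
        A = np.diag(np.asarray(eigvals, dtype=complex))
        total = 0.0
        for _ in range(trials):
            v = rng.normal(size=n) + 1j * rng.normal(size=n)
            v /= np.linalg.norm(v)
            P = np.eye(n) - np.outer(v, v.conj())
            total += schur(np.linalg.eigvalsh(P @ A @ P), lam).real
        return total / trials

    A_eigs, B_eigs = [3.0, 2.0, 1.5, 1.0], [4.0, 2.5, 2.0, 0.5]
    print(avg_compressed(A_eigs) / avg_compressed(B_eigs),
          (schur(A_eigs, lam) / schur(B_eigs, lam)).real)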

  • Second key idea: another proof replaces the double integral by
    difference operators.

    Alternate the raising difference operator with the raising
    integral operator; obtain a family of biorthogonal functions.

    Two of the three analogues of Macdonald's conjectures are
    immediate! Proof of the remaining Macdonald conjecture (symmetry)
    uses extra properties of interpolation functions (a special case
    of the biorthogonal functions).

    Taking suitable limits gives the Macdonald conjectures for
    Koornwinder and (ordinary) Macdonald polynomials.

    Big open question: other root systems?


  • Type II transformations

    Define

    II^{(n)}(t_0,\ldots,t_7;t;p,q) :=
    \int_{z\in T^n}
      \prod_{1\le i<j\le n}
        \frac{\Gamma_{p,q}(t z_i^{\pm 1}z_j^{\pm 1})}
             {\Gamma_{p,q}(z_i^{\pm 1}z_j^{\pm 1})}
      \prod_{1\le i\le n}
        \frac{\prod_{0\le r\le 7}\Gamma_{p,q}(t_r z_i^{\pm 1})}
             {\Gamma_{p,q}(z_i^{\pm 2})}
      \prod_{1\le i\le n}\frac{dz_i}{2\pi i z_i},

    with the balancing condition t^{2n-2}\,t_0 t_1\cdots t_7 = (pq)^2.
    This integral satisfies a nontrivial transformation acting on the
    parameters t_0,\ldots,t_7.

  • Comments:

    (1) Together with permutations of the parameters, this generates
    a group of order 2903040: the Weyl group of E_7! In fact, there
    is a partial E_8 symmetry (the dimension n changes, and must
    remain a nonnegative integer).

    (2) For t = q, this gives a solution to the elliptic Painlevé
    equation (via a tau function). For t = \sqrt{q} and t = q^2, one
    obtains a new four-term bilinear recurrence with E_8 symmetry.
    (The proofs use Plücker relations for determinants and Pfaffians,
    respectively.)

    (3) Multiplying the integrand by interpolation functions gives a
    generalization of the transformation, indexed by one or two pairs
    of partitions.
