4.12 Algebraic dependence

Let g, f_1, ..., f_r be polynomials in K[x1,...,xn]. We want to check whether

  1. f_1, ..., f_r are algebraically dependent. Let
    I=<Y_1-f_1,...,Y_r-f_r> subset K[x1,...,xn,Y_1,...,Y_r].

    Then the intersection of I with K[Y_1,...,Y_r] is the ideal of all algebraic relations between f_1, ..., f_r.
  2. g in K[f_1,...,f_r]. This holds if and only if the normal form of g with respect to I and a block ordering for X=(x1,...,xn) and Y=(Y_1,...,Y_r) with X>Y lies in K[Y]. If this normal form is h(Y_1,...,Y_r), then g=h(f_1,...,f_r). (A short direct illustration of both points follows this list.)
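
As a small direct illustration of both points (the ring S and the polynomials f_1=xy, f_2=x^2y^2, g=x^3y^3 below are chosen only for demonstration; the block ordering is realized as the product ordering (dp(2),dp(2))):

ring S=0,(x,y,Y(1..2)),(dp(2),dp(2));  // block ordering with x,y > Y(1),Y(2)
ideal I=Y(1)-x*y,Y(2)-x^2*y^2;
eliminate(I,x*y);           // point 1: the ideal of relations, generated by Y(1)^2-Y(2)
poly g=x^3*y^3;             // g=f_1*f_2, hence g lies in K[f_1,f_2] ...
reduce(g,std(I));           // ... point 2: its normal form involves only Y(1),Y(2)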

Both questions can be answered using the following procedure. If the second argument is zero, it checks for algebraic dependence and returns the ideal of relations between the generators of the given ideal. Otherwise it checks for subring membership and returns the normal form of the second argument with respect to the ideal I.

proc algebraicDep(ideal J, poly g)
{
  def R=basering;         // give a name to the basering
  int n=size(J);
  int k=nvars(R);
  int i;
  intvec v;

  // construction of the new ring

  v[n+k]=0;               // weight vector of length n+k, initialized with zeros
  for(i=1;i<=k;i++)
  {
    v[i]=1;
  }
  string orde="(a("+string(v)+"),dp);";   // x-variables get weight 1, Y-variables weight 0, i.e., X > Y
  string ri="ring Rhelp=("+charstr(R)+"),
                        ("+varstr(R)+",Y(1.."+string(n)+")),"+orde;
                          // ring definition as a string
  execute(ri);            // execution of the string

  // construction of the new ideal I=(J[1]-Y(1),...,J[n]-Y(n))
  ideal I=imap(R,J);
  for(i=1;i<=n;i++)
  {
    I[i]=I[i]-var(k+i);
  }
  poly g=imap(R,g);
  if(g==0)
  {
    // construction of the ideal of relations by elimination
    poly el=var(1);
    for(i=2;i<=k;i++)
    {
      el=el*var(i);
    }
    ideal KK=eliminate(I,el);
    keepring(Rhelp);
    return(KK);
  }
  // reduction of g with respect to I
  ideal KK=reduce(g,std(I));
  keepring(Rhelp);
  return(KK); 
}

// applications of the procedure
ring r=0,(x,y,z),dp;
ideal i=xz,yz;
algebraicDep(i,0);
==> _[1]=0
setring r; kill Rhelp;
ideal j=xy+z2,z2+y2,x2y2-2xy3+y4;
algebraicDep(j,0);
==> _[1]=Y(1)^2-2*Y(1)*Y(2)+Y(2)^2-Y(3)
setring r; kill Rhelp;
poly g=y2z2-xz;
algebraicDep(i,g);
==> _[1]=Y(2)^2-Y(1)
setring r; kill Rhelp;
algebraicDep(j,g);
==> _[1]=-z^4+z^2*Y(2)-x*z
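
The results can be checked by hand: the normal form computed for i and g says g=(yz)^2-(xz), and the relation found for j can be verified by substituting the generators of j back in (a quick sanity check in the ring r):

setring r; kill Rhelp;
g-((yz)^2-xz);              // g=f_2^2-f_1 for f_1=xz, f_2=yz; this yields 0
poly f1=xy+z2; poly f2=z2+y2; poly f3=x2y2-2xy3+y4;
f1^2-2*f1*f2+f2^2-f3;       // the relation Y(1)^2-2*Y(1)*Y(2)+Y(2)^2-Y(3), evaluated at the generators of j; this yields 0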

For computing a standard basis of a zero-dimensional ideal with respect to the lexicographical ordering, the commands stdhilb and stdfglm should be used. However, one cannot a priori predict which of the two commands is faster; this very much depends on the (input) example. Therefore, we use MPtcp links to let the two commands work on the problem independently and in parallel, so that the one which finishes first delivers the result.

The example we use is the so-called "omndi example"; for more details see T. Wichmann: Der FGLM-Algorithmus: verallgemeinert und implementiert in Singular. Diplomarbeit, Fachbereich Mathematik, Universitaet Kaiserslautern, 1997.

ring r=0,(a,b,c,u,v,w,x,y,z),lp;
ideal i=a+c+v+2x-1, ab+cu+2vw+2xy+2xz-2/3,  ab2+cu2+2vw2+2xy2+2xz2-2/5,
ab3+cu3+2vw3+2xy3+2xz3-2/7, ab4+cu4+2vw4+2xy4+2xz4-2/9, vw2+2xyz-1/9,
vw4+2xy2z2-1/25, vw3+xyz2+xy2z-1/15, vw4+xyz3+xy3z-1/21;
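
One could of course simply run both commands one after the other and compare their timings, as in the following sketch (using the timer variable; s1 and s2 are just names chosen here). This, however, always costs the sum of both running times, whereas with the parallel approach below one only waits for the faster of the two (up to the overhead of resource sharing mentioned in the remarks below):

int t=timer;
ideal s1=stdfglm(i);        // lexicographical standard basis via FGLM conversion
timer-t;                    // time used by stdfglm
t=timer;
ideal s2=stdhilb(i);        // lexicographical standard basis via the Hilbert-driven method
timer-t;                    // time used by stdhilb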

link l_hilb,l_fglm = "MPtcp:fork","MPtcp:fork";         // 1.

open(l_fglm); open(l_hilb);

write(l_hilb, quote(system("pid")));                    // 2.
write(l_fglm, quote(system("pid")));
int pid_hilb,pid_fglm = read(l_hilb),read(l_fglm);

write(l_hilb, quote(stdhilb(i)));                       // 3.
write(l_fglm, quote(stdfglm(eval(i))));

while ((! status(l_hilb, "read", "ready", 1)) &&        // 4.
     (! status(l_fglm, "read", "ready", 1))) {}

if (status(l_hilb, "read", "ready"))
{
  "stdhilb won !!!!"; size(read(l_hilb));
  close(l_hilb); pid_fglm = system("sh","kill "+string(pid_fglm));
}
else                                                    // 5.
{
  "stdfglm won !!!!"; size(read(l_fglm));
  close(l_fglm); pid_hilb = system("sh","kill "+string(pid_hilb));
}
==> stdfglm won !!!!
==> 9

Some explanatory remarks are in order:

  1. Instead of using links of the type MPtcp:fork, we could alternatively use MPtcp:launch links, so that the two "competing" SINGULAR processes run on different machines. This has the advantage of "true" parallel computing, since no resource sharing is involved (as there usually is with forked processes); a minimal sketch of this variant is given after these remarks.
  2. Unfortunately, MPtcp links do not offer a means to (asynchronously) interrupt or kill an attached (i.e., launched or forked) process. Therefore, we explicitly need to get the process id numbers of the competing SINGULAR processes, so that we can later "kill" the loser.
  3. Notice how quoting is used to prevent local evaluation (i.e., local computation of results). Since we "forked" the two competing processes, the identifier i is defined and has identical values in both child processes. Therefore, the innermost eval can be omitted and only the identifier i needs to be communicated to the children (as is done for the l_hilb link). However, when MPtcp:launch links are used, the inner evaluation must be applied so that actual values, and not just identifiers, are communicated (as is done for the l_fglm link).
  4. We go into a "sleepy" loop and wait until one of the two children has finished the computation. That is, the current process checks approximately once per second whether one of the connecting links is ready to be read from, and sleeps (i.e., suspends its execution) in between.
  5. The child which won delivers its result and is terminated with the usual close command. The other child, which is still computing, needs to be terminated by an explicit (i.e., system) kill command, since it cannot be terminated through the link while it is still computing.
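
For remark 1, only the link declarations and the write commands of the session change; the following is a minimal sketch, assuming the default launch configuration (see the documentation of MPtcp links for how to specify a remote host and the application to be started there):

link l_hilb,l_fglm = "MPtcp:launch","MPtcp:launch";     // launched instead of forked
open(l_fglm); open(l_hilb);
// a launched process does not know the identifier i, so eval is
// needed in both write commands (cf. remark 3):
write(l_hilb, quote(stdhilb(eval(i))));
write(l_fglm, quote(stdfglm(eval(i))));

With the default (local) launch, the remaining steps can stay as above; if the processes run on a remote host, the kill in step 5 would have to be issued on that host.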

4.13 Further examples

The example section of the SINGULAR manual contains further examples.

The names of the items listed there are the names of the examples in the online help system. So the command help T1 and T2 displays the example about the computation of first order deformations and obstructions.