Stochastic games with cost functionals $J^{(i)}_{\rho,x}(v) = E \int_0^\infty e^{-\rho t}\, l_i(y, v)\,dt$, $i = 1, 2$, with controls $v = (v_1, v_2)$ and state $y(t)$ with $y(0) = x$, are considered. Each player wants to minimize his (her) cost functional. Here $E$ denotes the expected value, and the state variable $y$ is coupled with the controls $v$ via a stochastic differential equation with initial value $x$. The corresponding Bellman system, which is used for the calculation of feedback controls $v = v(y)$ and for the solvability of the game, leads to a class of diagonal second-order nonlinear elliptic systems, which also occur in other branches of analysis. Their behaviour concerning existence and regularity of solutions is, despite many positive results, not yet well understood, even in the case where the $l_i$ are simple quadratic functions. The objective of this paper is to give new insight into these questions for fixed $\rho > 0$ and, primarily, to analyse the limiting behaviour as the discount $\rho \to 0$. We find that the modified solutions of the stochastic games converge, along subsequences, to the solution of the so-called ergodic Bellman equation, and that the average cost converges. A former restriction on the space dimension has been removed, and a reasonable class of quadratic integrands can be treated. More specifically, we consider Bellman systems of the form $-\Delta z + \lambda = H(x, Dz)$, where the space variable $x$ ranges over a periodic cube (for the sake of simplifying the presentation). These systems are shown to have smooth solutions. If $u_\rho$ is the solution of $-\Delta u_\rho + \rho u_\rho = H(x, Du_\rho)$, then the convergence of $u_\rho - \bar{u}_\rho$ to $z$ as $\rho \to 0$ is established, where $\bar{u}_\rho$ denotes the mean value of $u_\rho$. The conditions on $H$ allow some quadratic growth in $Du$.
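To fix notation, the objects described above can be collected in one display. The controlled diffusion is written here with a generic drift $g$ and additive noise; this explicit form is an illustrative assumption, since the abstract only states that $y$ and $v$ are coupled through a stochastic differential equation with initial value $x$:

% Sketch of the setup; the form of the state equation is an assumption,
% as the abstract does not specify the drift or diffusion coefficients.
\begin{align*}
  dy(t) &= g(y, v)\,dt + dW_t, \qquad y(0) = x
    && \text{(state dynamics, assumed form)}\\
  J^{(i)}_{\rho,x}(v) &= E \int_0^\infty e^{-\rho t}\, l_i(y, v)\,dt, \quad i = 1, 2
    && \text{(discounted costs)}\\
  -\Delta u_\rho + \rho\, u_\rho &= H(x, Du_\rho)
    && \text{(discounted Bellman system)}\\
  -\Delta z + \lambda &= H(x, Dz)
    && \text{(ergodic Bellman system)}
\end{align*}

In this notation the main limit result reads $u_\rho - \bar{u}_\rho \to z$ along subsequences as $\rho \to 0$, with the constant $\lambda$ playing the role of the limiting average cost whose convergence is asserted above.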