$ \def\dash{\unicode{8212}} \def\trov{\triangleright} \def\ttrov{\triangleright_t} \def\xtrov{\triangleright_x} \def\imp{\Rightarrow} \def\celse{\ |\ } \def\prov{\vdash} \def\dns{{\downarrow}} \def\ups{{\uparrow}} \def\preq{\dashv\vdash} \def\sh{\#} \def\lol{\multimap} \def\tensor{\otimes} \def\wat#1{\mathrel{@_{#1}}} \def\ast#1{\mathrel{*_{#1}}} \def\bull#1{\mathrel{\bullet_{#1}}} \def\rle{\sqsubseteq} $
Source Language
The source language is a polarized version of the Pfenning-Davies modal language. Its syntax goes like this:
Negative $N ::= P \lol N \celse N \land N \celse \top \celse \Diamond P \celse a^- \celse \ups P$
Positive $P ::= P \tensor P \celse P \lor P \celse 1 \celse 0 \celse \square N \celse a^+ \celse \dns N$
Target Language
The target language is just ordinary focused first-order logic, nothing surprising. The first-order part has a couple of distinguished sorts and function symbols:
Frames $f ::= p \lol f \celse \phi$
Resources $p ::= p \tensor p \celse \epsilon \celse \alpha$
Worlds $t ::= x$
Frame variables are $\phi$, resource variables are $\alpha$, and $\epsilon$ is the empty resource. Kripke worlds $t$ have no interesting structure. They may only be variables $x$.
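For example, $\alpha_1 \tensor (\alpha_2 \tensor \epsilon)$ is a resource, and $\alpha_1 \lol \alpha_2 \lol \phi$ is a frame: by the grammar above, a frame is just a sequence of resource arguments terminated by a frame variable.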

We will speak of continuations $k(\alpha)$ that take a resource and return a (negative) target-language proposition, and continuations $\kappa(\phi)$ that take a frame and return a (negative) target-language proposition.

A continuation $k$ is said to be monotone if $p_1 \rle p_2$ implies $k(p_1) \prov k(p_2)$. Similarly $\kappa$ is monotone if $f_1 \rle f_2$ implies $\kappa(f_1) \prov \kappa(f_2)$.

The expression $p \ttrov f$ is basically a negative atom, but with inference rule $$ p \rle p' \qquad f \rle f' \over \Gamma, p\ttrov f \prov p' \ttrov f' $$ which means that $\dash \ttrov f$ and $p\ttrov \dash$ are monotone, i.e. that $p_1 \rle p_2$ implies $p_1 \ttrov f \prov p_2 \ttrov f$, and $f_1 \rle f_2$ implies $p \ttrov f_1 \prov p \ttrov f_2$. For this rule to pass cut and identity we need $\rle$ to be reflexive and transitive.

We sometimes omit target-language shifts for brevity.
Abbrevs and Translation
Here are two essentially independent but equivalent translations, the $@$ translation vs. the $\bullet/*$ translation. The reason I like the $@$ translation is that its target language is smaller (basically only negatives, and in particular no inequalities to deal with). The reason I like the $\bullet/*$ translation is that its individual clauses are less complicated to write out, and there are essentially no fiddly continuations to worry about if you expand out the definition of $\bullet$ everywhere it's used. It's also easier to prove things about directly; see below. $$ P \bull t k = \forall \alpha. P \ast t \alpha \imp k(\alpha) \qquad P \bull t f = P \bull t (\dash \ttrov f)$$ $$ N \bull t \kappa = \forall \phi. N \ast t \phi \imp \kappa(\phi)\qquad N \bull t p = N \bull t (p \ttrov \dash)$$
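Unfolding both abbreviations at once gives the fully concrete forms that the proofs below manipulate: $$ P \bull t f = \forall \alpha. P \ast t \alpha \imp (\alpha \ttrov f) \qquad N \bull t p = \forall \phi. N \ast t \phi \imp (p \ttrov \phi) $$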
For each negative connective, the clauses for $N \wat t \kappa$ and $N \ast t f$ are:
$$ (P \lol N) \wat t \kappa = P \wat t \alpha .\ N \wat t \phi .\ \kappa(\alpha \lol \phi) \qquad (P \lol N) \ast t f = \exists \alpha. (P \ast t \alpha) \tensor \exists \phi . (N \ast t \phi) \tensor (\alpha \lol \phi \rle f) $$
$$ (N_1 \land N_2) \wat t \kappa = N_1 \wat t \kappa \land N_2 \wat t \kappa \qquad (N_1 \land N_2) \ast t f = N_1 \ast t f \lor N_2 \ast t f $$
$$ \top \wat t \kappa = \top \qquad \top \ast t f = 0 $$
$$ (\Diamond P) \wat t \kappa = \forall \phi . \star^+(\phi) \imp (\dns \forall x \ge t . P \wat x (\dash \xtrov \phi)) \imp \kappa(\phi) \qquad (\Diamond P) \ast t f = \star^+(f) \tensor \dns \forall x \ge t . P \bull x f $$
$$ a^- \wat t \kappa = \forall \phi . \dns a^+(\phi, t) \imp \kappa(\phi) \qquad a^- \ast t f = a^+(f, t) $$
$$ (\ups P) \wat t \kappa = \forall \phi . \dns P \wat t (\dash \ttrov \phi) \imp \kappa(\phi) \qquad (\ups P) \ast t f = \dns P \bull t f $$
For each positive connective, the clauses for $P \wat t k$ and $P \ast t p$ are:
$$ (P_1 \tensor P_2) \wat t k = P_1 \wat t \alpha_1 .\ P_2 \wat t \alpha_2 .\ k (\alpha_1 \tensor \alpha_2) \qquad (P_1 \tensor P_2) \ast t p = \exists \alpha_1. (P_1 \ast t \alpha_1) \tensor \exists \alpha_2 . (P_2 \ast t \alpha_2) \tensor (\alpha_1 \tensor \alpha_2 \rle p) $$
$$ (P_1 \lor P_2) \wat t k = P_1 \wat t k \land P_2 \wat t k \qquad (P_1 \lor P_2) \ast t p = P_1 \ast t p \lor P_2 \ast t p $$
$$ 1 \wat t k = k(\epsilon) \qquad 1 \ast t p = \epsilon \rle p $$
$$ 0 \wat t k = \top \qquad 0 \ast t p = 0 $$
$$ (\square N) \wat t k = \forall \alpha . (\dns \forall x \ge t . N \wat x (\alpha\xtrov \dash)) \imp k(\alpha) \qquad (\square N) \ast t p = \dns \forall x \ge t . N \bull x p $$
$$ a^+ \wat t k = \forall \alpha . a^+(\alpha, t) \imp k(\alpha) \qquad a^+ \ast t p = a^+(p, t) $$
$$ (\dns N) \wat t k = \forall \alpha . \dns N \wat t (\alpha\ttrov \dash) \imp k(\alpha) \qquad (\dns N) \ast t p = \dns N \bull t p $$
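As a small worked example of how the clauses compose, take the positive proposition $a^+ \tensor 1$. On the $@$ side, feeding the continuation for $1$ into the one for $a^+$ gives $$ (a^+ \tensor 1) \wat t k = \forall \alpha_1 . a^+(\alpha_1, t) \imp k(\alpha_1 \tensor \epsilon) $$ while on the $*$ side we get $$ (a^+ \tensor 1) \ast t p = \exists \alpha_1. a^+(\alpha_1, t) \tensor \exists \alpha_2 . (\epsilon \rle \alpha_2) \tensor (\alpha_1 \tensor \alpha_2 \rle p) $$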

Theorems
Theorem The two translations are basically the same. For any monotone continuations $k$ or $\kappa$, we have
$N\wat t \kappa \preq N \bull t \kappa$
$P\wat t k \preq P \bull t k$
Proof: Straightforward induction. We show the representative cases; the rest are similar or immediate.
Case $P \lol N$: $$ \forall \phi' . (\exists \alpha. (P \ast t \alpha) \tensor \exists \phi . (N \ast t \phi) \tensor (\alpha \lol \phi \rle \phi')) \imp \kappa(\phi')$$ $$\preq \forall \alpha . P \ast t \alpha \imp \forall \phi . N \ast t \phi \imp \kappa(\alpha \lol \phi)$$ $$\preq P \wat t \alpha . N \wat t \phi . \kappa(\alpha \lol \phi)$$
Case $P_1 \tensor P_2$: $$ \forall \alpha . (\exists \alpha_1. (P_1 \ast t \alpha_1) \tensor \exists \alpha_2 . (P_2 \ast t \alpha_2) \tensor (\alpha_1\tensor\alpha_2 \rle \alpha )) \imp k(\alpha)$$ $$\preq \forall \alpha_1 . P_1 \ast t \alpha_1 \imp \forall \alpha_2 . P_2 \ast t \alpha_2 \imp k(\alpha_1 \tensor\alpha_2)$$ $$\preq P_1 \wat t \alpha_1 . P_2 \wat t \alpha_2 . k(\alpha_1 \tensor \alpha_2)$$
Case $P_1 \lor P_2$: $$ \forall \alpha . (P_1 \ast t \alpha \lor P_2 \ast t \alpha) \imp k(\alpha)$$ $$\preq (\forall \alpha . P_1 \ast t \alpha \imp k(\alpha)) \land (\forall \alpha . P_2 \ast t \alpha \imp k(\alpha))$$ $$\preq P_1 \wat t k \land P_2 \wat t k$$
Case $1$: $$ \forall \alpha . \epsilon \rle \alpha \imp k(\alpha) \preq k(\epsilon)$$ In each case, one direction of the first step instantiates the quantifier using reflexivity of $\rle$; the other direction is where monotonicity of the continuation is used.
Brouwer's Lemma For any $p$, $f$, $t$, and target-language predicate $X$,
$$ \forall \alpha . (\forall \phi' . (\forall \alpha' . X(\alpha') \imp \alpha' \ttrov\phi') \imp \alpha \ttrov \phi' ) \imp (\alpha\ttrov f) \preq \forall \alpha . X(\alpha) \imp (\alpha\ttrov f) $$
$$ \forall \phi . (\forall \alpha' . (\forall \phi' . X(\phi') \imp \alpha'\ttrov\phi' ) \imp \alpha'\ttrov \phi ) \imp (p\ttrov \phi) \preq \forall \phi . X(\phi) \imp (p \ttrov\phi) $$
Proof: Direct. Basically the same as showing $\lnot\lnot\lnot A \preq \lnot A$.
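In more detail, for the first equivalence: right to left, assume $\forall \alpha . X(\alpha) \imp (\alpha\ttrov f)$, fix $\alpha$, and assume the inner hypothesis of the left-hand side; instantiating its $\phi'$ at $f$, the premise it demands is exactly the assumed right-hand side, yielding $\alpha \ttrov f$. Left to right, assume the left-hand side and fix $\alpha$ with $X(\alpha)$; to discharge the premise of the left-hand side, fix $\phi'$ and a hypothesis $\forall \alpha' . X(\alpha') \imp \alpha' \ttrov \phi'$, and apply it at $\alpha' = \alpha$. The second equivalence is symmetric.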

Theorem
$(\dns \ups P) \bull t f \preq P \bull t f$
$(\ups \dns N) \bull t p \preq N \bull t p$
Proof: Direct. We can deduce
$$(\dns \ups P) \bull t f $$ $$= \forall \alpha . (\dns \ups P) \ast t \alpha \imp (\alpha\ttrov f) $$ $$\preq \forall \alpha . (\forall \phi . \ups P \ast t \phi \imp \alpha \ttrov \phi ) \imp (\alpha\ttrov f) $$ $$\preq \forall \alpha . (\forall \phi . (\forall \alpha' . P \ast t \alpha' \imp \alpha' \ttrov\phi) \imp \alpha \ttrov \phi ) \imp (\alpha\ttrov f) $$ $$\preq \forall \alpha . P \ast t \alpha \imp (\alpha\ttrov f) $$ $$= P \bull t f$$ where the last $\preq$ step is Brouwer's Lemma with $X(\alpha') = P \ast t \alpha'$, and symmetrically for the other part of the theorem.

Theorem Let $X^\circ$ be $X$ with all double shifts removed.
$ P^\circ \bull t f \preq P \bull t f$
$N^\circ \bull t p \preq N \bull t p$
Proof: If the proposition has some double shifts at the outside, use the previous theorem to eliminate them. In any event, we must make sure the induction hypothesis is applicable to deal with more deeply nested double shifts. For connectives that effectively embed a shift ($\square,\Diamond,\ups,\dns$) this is trivial. For instance, since $\square N \ast t p = \dns \forall x \ge t. N \bull x p$ has the subexpression $N \bull x p$, we can immediately apply the induction hypothesis.

Consider, however, $P \lol N$. We want to show $$(P^\circ \lol N^\circ) \bull t p \preq (P \lol N) \bull t p$$ This expands to $$ \forall \phi' . (\exists \alpha. (P^\circ \ast t \alpha) \tensor \exists \phi . (N^\circ \ast t \phi) \tensor (\alpha \lol \phi\rle\phi')) \imp (p\ttrov\phi') $$ $$\preq \forall \phi' . (\exists \alpha. (P \ast t \alpha) \tensor \exists \phi . (N \ast t \phi) \tensor ( \alpha \lol \phi\rle\phi')) \imp (p\ttrov\phi')$$ which is equivalent (after doing some derivations in the target language) to having to show $$\forall \alpha. (P^\circ \ast t \alpha) \imp \forall \phi . (N^\circ \ast t \phi) \imp (p\ttrov \alpha \lol \phi)$$ $$\preq \forall \alpha. (P \ast t \alpha) \imp \forall \phi . (N \ast t \phi) \imp (p\ttrov \alpha \lol \phi)$$ Now we just have to massage the first of these propositions so that the $P^\circ$ and $N^\circ$ become amenable to the induction hypothesis. The key fact is the currying equivalence $p \ttrov \alpha \lol \phi \preq p \tensor \alpha \ttrov \phi$, which puts the continuation into the shape $p \ttrov \dash$ (resp. $\dash \ttrov f$) that the induction hypothesis expects. We reason as follows: $$\forall \alpha. (P^\circ \ast t \alpha) \imp \forall \phi . (N^\circ \ast t \phi) \imp (p\ttrov \alpha \lol \phi)$$ $$\preq \forall \alpha. (P^\circ \ast t \alpha) \imp \forall \phi . (N^\circ \ast t \phi) \imp (p \tensor \alpha \ttrov \phi)$$ $$\preq \forall \alpha. (P^\circ \ast t \alpha) \imp \forall \phi . (N \ast t \phi) \imp (p \tensor \alpha \ttrov \phi)$$ $$\preq \forall \phi . (N \ast t \phi) \imp \forall \alpha. (P^\circ \ast t \alpha) \imp ( \alpha \ttrov p \lol \phi)$$ $$\preq \forall \phi . (N \ast t \phi) \imp \forall \alpha. (P \ast t \alpha) \imp ( \alpha \ttrov p \lol \phi)$$ $$\preq \forall \alpha. (P \ast t \alpha) \imp \forall \phi . (N \ast t \phi) \imp (p\ttrov \alpha \lol \phi)$$ The second step is currying; the third applies the induction hypothesis to $N^\circ$; the fourth exchanges the quantifiers and curries in the other direction; the fifth applies the induction hypothesis to $P^\circ$; and the last undoes these rearrangements.