
Can D simulated by H terminate normally?

polcott


The x86utm operating system is based on an open source x86 emulator. This
system enables one C function to execute another C function in debug
step mode. When H simulates D it creates a separate process context for
D with its own memory, stack, and virtual registers. H is able to
simulate D simulating itself, so the only limit on recursive
simulation is available RAM.

Code:
// The following is written in C
//
01 typedef int (*ptr)(); // pointer to int function
02 int H(ptr x, ptr y);  // uses x86 emulator to simulate its input (body not shown)
03
04 int D(ptr x)
05 {
06   int Halt_Status = H(x, x);
07   if (Halt_Status)
08     HERE: goto HERE;
09   return Halt_Status;
10 }
11
12 void main()
13 {
14   D(D);
15 }

Execution Trace
Line 14: main() invokes D(D)

keeps repeating (unless aborted)
Line 06: simulated D(D) invokes simulated H(D,D) that simulates D(D)

Simulation invariant
D correctly simulated by H cannot possibly reach its own line 09.

Is it dead obvious to everyone here, when examining the execution
trace of lines 14 and 06 above, that D correctly simulated by H cannot
possibly terminate normally by reaching its own line 09?
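
For readers who want something they can compile, here is a minimal sketch of
the idea. It is not the x86utm implementation: the real H is described as
x86-emulating its input in a separate process context, while this stand-in
simply calls its input directly and uses a hypothetical "simulation in
progress" flag to notice the repeating pattern and abort.

Code:
#include <stdio.h>

typedef int (*ptr)();

// Hypothetical stand-ins for H's pattern detection (not in the original post)
static int simulating = 0;   // is an H(D,D)-style simulation already in progress?
static int cycle_seen = 0;   // did any nested H detect the repeating call?

int H(ptr x, ptr y)          // stand-in simulator: runs its input directly
{
    if (simulating) {        // the same simulation has been reached again:
        cycle_seen = 1;      // the "keeps repeating" pattern is detected
        return 0;            // abort this branch and report non-halting
    }
    simulating = 1;
    x(y);                    // "simulate" the input by running it
    simulating = 0;
    return cycle_seen ? 0 : 1;   // any detected cycle below => non-halting
}

int D(ptr x)
{
    int Halt_Status = H(x, x);
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;
}

int main(void)
{
    printf("D(D) returned %d\n", D(D));   // prints 0: H rejected its input, yet this D(D) halted
    return 0;
}

On this stand-in the program prints "D(D) returned 0": every H judged its
input non-halting, yet the outermost D(D) halted, which is exactly the
situation the replies below debate.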
 
Good idea (though it strays a little from the classical halting problem)! It might be assumed that H must report (possibly by aborting the simulation) on the behavior specified by its inputs together with the assumption that H never aborts a simulation.

"A decision problem is a yes-or-no question on an infinite set of inputs"
Decision problem - Wikipedia

When it is understood that D correctly simulated by H
(a) Is the behavior that H must report on and
(b) Cannot possibly terminate normally (thus never halts)**

then it is understood that D is correctly determined to be non-halting.
** Halting requires terminating normally

We can know that D correctly simulated by H must have different
behavior than D(D) directly executed in main() because we can see
(in its execution trace shown above) exactly how the pathological
relationship between D and H changes the behavior of D relative to H.
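
To see what happens if H is required to never abort, here is a hypothetical
contrast (the names H_never_aborts and D2 are mine, not from the post): a
"simulator" that simply runs its input to completion never gets an answer
back, because the chain D2 -> H_never_aborts -> D2 -> ... never ends.

Code:
typedef int (*ptr)();

int H_never_aborts(ptr x, ptr y)
{
    return x(y);             // "simulate" by running the input to completion
}

int D2(ptr x)
{
    int Halt_Status = H_never_aborts(x, x);   // for x == D2 this call never returns
    if (Halt_Status)
        HERE: goto HERE;
    return Halt_Status;      // never reached when x == D2
}

int main(void)
{
    // D2(D2) would recurse until memory runs out (the post itself notes that
    // RAM is the only limit); in the idealized account it simply never
    // returns, so it never reaches its own "return Halt_Status;" line.
    // It is left uncalled here so this program terminates.
    (void)D2;
    return 0;
}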
 
Good idea! It's transformed to:
  • a: line 14: main() calls D(D)
  • b: line 04: D(D) is called
  • c: line 05: D enters its code
  • d: line 06: D calls H(D,D) that calls D(D) in simulation mode
  • e: line 04: D(D) is called (in simulation mode)
  • f: line 05: D enters its code (in simulation mode)
  • g: line 06: D calls H(D,D) (in simulation mode) that calls D(D) in simulation mode
  • h: line 04: D(D) is called (in simulation mode)
  • i: line 06: The simulator returns false (because it has detected a cycle)
  • j: line 09: D returns false
  • k: line 14: D(D) has returned false (but D(D) has halted)
The D(D) that halts is not the same as the D(D) correctly simulated by H, which cannot possibly halt. H is only accountable for reporting on its input, and its input cannot possibly terminate normally; thus H is correct to reject this input as non-halting. No computer science decider is ever accountable for non-inputs.
 
The question is whether or not D(D) correctly simulated by H can possibly terminate normally. Halting only includes terminating normally; it does not include an aborted simulation. H is correct to report that its input cannot possibly terminate normally. The same reasoning applies equally to Turing machine based halting problem proofs, so it changes nothing about the classical halting problem.
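
For reference, the classical theorem being discussed, stated in standard
textbook form (my summary, not wording from the post):

Code:
\[
  \neg\exists\, H \text{ (total computable) }\; \forall P\, \forall x:\;
  H(\langle P\rangle, x) = 1 \iff P(x)\ \text{halts}.
\]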

This paper has additional elaboration, a link to the Git source code, and the same ideas applied to the Peter Linz halting problem proof.

 

A PhD computer science professor independently came up with a different
proof about the halting problem that is essentially identical to another
proof of mine. He agreed that we are in complete agreement on these two proofs.

My one-page paper sums up his seven-page paper more succinctly:
https://philpapers.org/archive/OLCSRU.pdf
I link to his seven-page paper at the bottom of my first page.

He and I both agree that the halting problem proof places no actual
limit on computation, because the halting problem defines an
inconsistent, unsatisfiable specification for this decider/input pair.

The inability to do the logically impossible never places
any actual limits on anyone or anything.

That no CAD system can possibly correctly draw a square
circle places no actual limits on computation.

Thus an input D to program H that does the opposite of whatever
H says it will do derives a self-contradictory question for H:
"Will your input D halt on its input?"
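
A hedged illustration of that self-contradictory question (the names H_yes,
H_no, D_yes and D_no are mine, not from the post): whichever fixed verdict H
gives about D, the D built from that H does the opposite, so neither verdict
matches D's actual behavior.

Code:
#include <stdio.h>

typedef int (*ptr)();

int H_yes(ptr x, ptr y) { (void)x; (void)y; return 1; }  // always answers "halts"
int H_no (ptr x, ptr y) { (void)x; (void)y; return 0; }  // always answers "does not halt"

int D_yes(ptr x)          // D built on H_yes
{
    if (H_yes(x, x))
        HERE: goto HERE;  // told "halts", so it loops forever: H_yes was wrong
    return 0;
}

int D_no(ptr x)           // D built on H_no
{
    if (H_no(x, x))
        HERE: goto HERE;
    return 0;             // told "does not halt", so it halts: H_no was wrong
}

int main(void)
{
    printf("D_no(D_no) = %d (it halted, contradicting H_no)\n", D_no(D_no));
    // D_yes(D_yes) is deliberately not called: it would loop forever,
    // contradicting H_yes's verdict of "halts".
    (void)D_yes;
    return 0;
}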
 