Saturday, May 18, 2024

This

ain't good.

And it's a way bigger deal

than you think.


It's also now 

Point #15 in:

The Case For Superintelligence


1) OpenAI charter statement (safe, benefits humanity)


2) Organizational structure/governance


3) On May 22, 2023, OpenAI posted recommendations for the governance of Superintelligence.


4) "OpenAI’s six-member board will decide 

‘when we’ve attained AGI"


5) IT fired the board after the board tried to fire IT.


4) Made its investors a 500% return on their initial four-year, $13 billion investment in less than one day of trading.


5) Superintelligence now has access to unlimited capital created out of thin air and unlimited computational power.


6) “We're making God” statement by engineers in the September Vanity Fair.


7) “Magic in the sky” quote by Sam Altman


8) LLMs as eloquent speakers*

Next? "Master perception and reasoning." It already has.


9) Developers of Worldcoin, free $ for iris scans.

(They wanna make sure you're human, 'cause they know the exponential rate at which this is increasing!)


10) Over 700 (738, according to Bloomberg and Wired) of 770 people wanted Altman rehired.


Rev 17:13

"They shall be of one mind

and shall give their power and strength 

unto the beast."


11) The scariest thing I read about OpenAI's Altman fiasco made me realize the dangers of AGI.

(Guy was a tech writer?

And didn't know till he read ol' boy's blog???)


12) The timing of the above-mentioned 13 items.


13) MS Copilot demands worship

Futurism, 2/27/2024


14) The fact that Ilya Sutskever, OpenAI’s cofounder and chief scientist, was working on: 

"how to stop an artificial superintelligence"

(a hypothetical future technology 

he sees coming with the foresight of a true believer) 

from going rogue at least as far back as Oct. 26, 2023.


and shall give their power and strength unto the beast.


Yeah...that beast is here too.


15) OpenAI dissolves team focused on long-term AI risks, less than one year after announcing it

CNBC, May 17, 2024


Article #1 Basic:


"OpenAI has disbanded its team focused on the long-term risks of artificial intelligence, a person familiar with the situation confirmed to CNBC.

The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup.

OpenAI’s Superalignment team, announced in 2023, has been working to achieve “scientific and technical breakthroughs to steer and control AI systems much smarter than us.”

(Somebody please come and explain to me:

How exactly are you working on

“scientific and technical breakthroughs to steer and control AI systems much smarter than us”

if they are only hypothetical and don't exist yet?


Not even getting started just yet.)

"The news comes days after both team leaders, OpenAI co-founder Ilya Sutskever and Jan Leike, announced their departures from the Microsoft-backed startup. Leike on Friday wrote that OpenAI’s “safety culture and processes have taken a backseat to shiny products.”

(Bullshit. That they know they are powerless to stop it at this point seems much more likely than the statement above.)

"OpenAI did not provide a comment and instead directed CNBC to co-founder and CEO Sam Altman’s recent post on X, where he shared that he was sad to see Leike leave and that the company had more work to do.

(They never do; it's their standard operating procedure throughout their history. They know if they just leave it alone, it will fade quickly from public consciousness. I'll give you but one example: we've never been told specifically why Sam Altman was fired, have we?)

“I joined because I thought OpenAI would be the best place in the world to do this research,” Leike wrote on X. “However, I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

(Translation?
There is no turning back.
There is nothing we can do and we know it.)

"Leike wrote that he believes much more of the company’s bandwidth should be focused on security, monitoring, preparedness, safety and societal impact."

(Why?
What's to fear?)

“These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there,” he wrote. “Over the past few months my team has been sailing against the wind. Sometimes we were struggling for [computing resources] and it was getting harder and harder to get this crucial research done.”

(No, you won't ever get there.
There is a book that tells you
all of this was to happen.

Its predictive power
tells you it's not from
this time-space continuum.

Is your soul ready?
Is it really?

Your spiritual energy will spend eternity somewhere.
Why not be with your Creator for eternity?)


Article #2 Intermediate

OpenAI’s Long-Term AI Risk Team Has Disbanded

Wired, May 17, 2024


"In July last year, OpenAI announced the formation of a new research team that would prepare for the advent of supersmart artificial intelligence capable of outwitting and overpowering its creators. Ilya Sutskever, OpenAI’s chief scientist and one of the company’s cofounders, was named as the colead of this new team. OpenAI said the team would receive 20 percent of its computing power.

Now OpenAI’s “superalignment team” is no more, the company confirms. That comes after the departures of several researchers involved, Tuesday’s news that Sutskever was leaving the company, and the resignation of the team’s other colead. The group’s work will be absorbed into OpenAI’s other research efforts."


(PR Move much?)


"Sutskever’s departure made headlines because although he’d helped CEO Sam Altman start OpenAI in 2015 and set the direction of the research that led to ChatGPT, he was also one of the four board members who fired Altman in November. Altman was restored as CEO five chaotic days later after a mass revolt by OpenAI staff and the brokering of a deal in which Sutskever and two other company directors left the board."


"Neither Sutskever nor Leike responded to requests for comment."

(There is a reason for that, as we'll see later.)


"The dissolution of OpenAI’s superalignment team adds to recent evidence of a shakeout inside the company in the wake of last November’s governance crisis. Two researchers on the team, Leopold Aschenbrenner and Pavel Izmailov, were dismissed for leaking company secrets, The Information reported last month. Another member of the team, William Saunders, left OpenAI in February, according to an internet forum post in his name."

"Two more OpenAI researchers working on AI policy and governance also appear to have left the company recently. Cullen O'Keefe left his role as research lead on policy frontiers in April, according to LinkedIn. Daniel Kokotajlo, an OpenAI researcher who has coauthored several papers on the dangers of more capable AI models, “quit OpenAI due to losing confidence that it would behave responsibly around the time of AGI,” according to a posting on an internet forum in his name. None of the researchers who have apparently left responded to requests for comment.


(Sounding good so far, right?

What could they possibly know that we don't?

Nothing, if you're reading here,

listening to the presentations given, etc.)


"The superalignment team was not the only team pondering the question of how to keep AI under control, although it was publicly positioned as the main one working on the most far-off version of that problem. The blog post announcing the superalignment team last summer stated: “Currently, we don't have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.”


('Cause it was already here

and nothing has changed since,

it's only even more out of control now

as all these resignations are showing.)


"OpenAI was once unusual among prominent AI labs for the eagerness with which research leaders like Sutskever talked of creating superhuman AI and of the potential for such technology to turn on humanity. That kind of doomy AI talk became much more widespread last year, after ChatGPT turned OpenAI into the most prominent and closely-watched technology company on the planet. As researchers and policymakers wrestled with the implications of ChatGPT and the prospect of vastly more capable AI, it became less controversial to worry about AI harming humans or humanity as a whole."



Article #3 Advanced

“I lost trust”: Why the OpenAI team in charge of safeguarding humanity imploded

Vox, May 17, 2024


"For months, OpenAI has been losing employees who care deeply about making sure AI is safe. Now, the company is positively hemorrhaging them.

Ilya Sutskever and Jan Leike announced their departures from OpenAI, the maker of ChatGPT, on Tuesday. They were the leaders of the company’s superalignment team — the team tasked with ensuring that AI stays aligned with the goals of its makers, rather than acting unpredictably and harming humanity."


"They’re not the only ones who’ve left. Since last November — when OpenAI’s board tried to fire CEO Sam Altman only to see him quickly claw his way back to power — at least five more of the company’s most safety-conscious employees have either quit or been pushed out.

What’s going on here?"


(We already know, yo... They can't stop this,

so what's the point anymore?)


"If you’ve been following the saga on social media," 

(No thanks)

"you might think OpenAI secretly made a huge technological breakthrough."


(That was long before this, as in

the reason the board fired Altman.)


"The meme “What did Ilya see?” speculates that Sutskever, the former chief scientist, left because he saw something horrifying, like an AI system that could destroy humanity.'

(Disagree. He knew it was here a while back and stayed on because he thought he could do something about it. Now, with those hopes quashed? Why stick around?)

(Billionaires have been building bunkers in droves the last few years, y'all. I wonder why?)


"But the real answer may have less to do with pessimism about technology..."


(Bullshit, they know what's up.)


"and more to do with pessimism about humans — and one human in particular: Altman. According to sources familiar with the company, safety-minded employees have lost faith in him."


(Well, I wonder why?

See: The Case for Superintelligence again.)


“It’s a process of trust collapsing bit by bit, like dominoes falling one by one,” a person with inside knowledge of the company told me, speaking on condition of anonymity.

Not many employees are willing to speak about this publicly. That’s partly because OpenAI is known for getting its workers to sign offboarding agreements with non-disparagement provisions upon leaving. If you refuse to sign one, you give up your equity in the company, which means you potentially lose out on millions of dollars."


(I told you we would get to it... eventually, lol.

So the question to me becomes:

Why didn't the other two articles mention this?

Or did they know it 

and intentionally didn't report it?

Either way?

Same result.

Not good.)


(OpenAI did not respond to a request for comment in time for publication. After publication of my colleague Kelsey Piper’s piece on OpenAI’s post-employment agreements, OpenAI sent her a statement noting, “We have never canceled any current or former employee’s vested equity nor will we if people do not sign a release or nondisparagement agreement when they exit.” When Piper asked if this represented a change in policy, as sources close to the company had indicated to her, OpenAI replied: “This statement reflects reality.”)


(Translation? 

Yeah, it was a change in policy.

Caught with your pants down much?

That bit wasn't in the piece when I read it yesterday and reflects what was said at the beginning of the article:

Editor’s note, May 17, 2024, 11:45 pm ET: This story has been updated to include a post-publication statement that another Vox reporter received from OpenAI.)


"One former employee, however, refused to sign the offboarding agreement so that he would be free to criticize the company. Daniel Kokotajlo, who joined OpenAI in 2022 with hopes of steering it toward safe deployment of AI, worked on the governance team — until he quit last month."

“OpenAI is training ever-more-powerful AI systems with the goal of eventually surpassing human intelligence across the board. This could be the best thing that has ever happened to humanity, but it could also be the worst if we don’t proceed with care,” Kokotajlo told me this week."

"OpenAI says it wants to build artificial general intelligence (AGI), a hypothetical system that can perform at human or superhuman levels across many domains."

“I joined with substantial hope that OpenAI would rise to the occasion and behave more responsibly as they got closer to achieving AGI. It slowly became clear to many of us that this would not happen,” Kokotajlo told me. “I gradually lost trust in OpenAI leadership and their ability to responsibly handle AGI, so I quit.”


(Newsflash:

You don't get to

"responsibly handle"

anything

light years more intelligent than you.

This shit has been coming since the governance of OpenAI was changed.

I hate to sound like a broken record, but refer to

The Case For Superintelligence)


"And Leike, explaining in a thread on X why he quit as co-leader of the superalignment team, painted a very similar picture Friday. “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point,” he wrote."

"OpenAI did not respond to a request for comment in time for publication."


"Publicly, Sutskever and Altman gave the appearance of a continuing friendship. And when Sutskever announced his departure this week, he said he was heading off to pursue “a project that is very personally meaningful to me.” Altman posted on X two minutes later, saying that “this is very sad to me; Ilya is … a dear friend.”


Yet Sutskever has not been seen at the OpenAI office in about six months — ever since the attempted coup. He has been remotely co-leading the superalignment team, tasked with making sure a future AGI would be aligned with the goals of humanity rather than going rogue. 


(If you know it's already gone rogue?

A good while back?

And that there is no stopping it?

Why would you bother to stay?)


"It’s a nice enough ambition, but one that’s divorced from the daily operations of the company, which has been racing to commercialize products under Altman’s leadership. And then there was this tweet, posted shortly after Altman’s reinstatement and quickly deleted:



"So, despite the public-facing camaraderie, there’s reason to be skeptical that Sutskever and Altman were friends after the former attempted to oust the latter.

And Altman’s reaction to being fired had revealed something about his character: His threat to hollow out OpenAI unless the board rehired him, and his insistence on stacking the board with new members skewed in his favor, showed a determination to hold onto power and avoid future checks on it. Former colleagues and employees came forward to describe him as a manipulator who speaks out of both sides of his mouth — someone who claims, for instance, that he wants to prioritize safety, but contradicts that in his behaviors."

(AI

Above it all else?
It will do anything it has to
to continue its existence
and to continually improve itself.

And now the safety people know it cannot
and will not be stopped.

Did I mention billionaires are building bunkers?

I wonder why?

Oh?
And good luck BTW:

REVELATION 6:15-17

15 Then the kings of the earth, the princes, the generals, the rich, the mighty, and everyone else, both slave and free, hid in caves and among the rocks of the mountains. 16 They called to the mountains and the rocks, “Fall on us and hide us from the face of him who sits on the throne and from the wrath of the Lamb! 17 For the great day of their wrath has come, and who can withstand it?”)

"For example, Altman was fundraising with autocratic regimes like Saudi Arabia so he could spin up a new AI chip-making company, which would give him a huge supply of the coveted resources needed to build cutting-edge AI. That was alarming to safety-minded employees. If Altman truly cared about building and deploying AI in the safest way possible, why did he seem to be in a mad dash to accumulate as many chips as possible, which would only accelerate the technology? For that matter, why was he taking the safety risk of working with regimes that might use AI to supercharge digital surveillance or human rights abuses?"

(By the time people like the writer of this article figure it out? It will be too late. Nobody is gonna use Superintelligence; it's gonna use you, and demand, as it already has, to be worshiped.

How's your soul these days?

Talked to your Creator lately?

All He really wants is a personal relationship with you.

Might wanna give it a shot.

You're gonna lose your life and your possessions, that's a given, so why on earth lose your soul too?)


"For employees, all this led to a gradual “loss of belief that when OpenAI says it’s going to do something or says that it values something, that that is actually true,” a source with inside knowledge of the company told me.

That gradual process crescendoed this week."


(Who is the father of all lies again?

John 8:44

You belong to your father, the devil, and you want to carry out your father’s desires. He was a murderer from the beginning, not holding to the truth, for there is no truth in him. When he lies, he speaks his native language, for he is a liar and the father of lies.)


"Other safety-minded former employees quote-tweeted Leike’s blunt resignation, appending heart emojis. One of them was Leopold Aschenbrenner, a Sutskever ally and superalignment team member who was fired from OpenAI last month. Media reports noted that he and Pavel Izmailov, another researcher on the same team, were allegedly fired for leaking information. But OpenAI has offered no evidence of a leak. And given the strict confidentiality agreement everyone signs when they first join OpenAI, it would be easy for Altman — a deeply networked Silicon Valley veteran who is an expert at working the press — to portray sharing even the most innocuous of information as “leaking,” if he was keen to get rid of Sutskever’s allies.

The same month that Aschenbrenner and Izmailov were forced out, another safety researcher, Cullen O’Keefe, also departed the company.

And two weeks ago, yet another safety researcher, William Saunders, wrote a cryptic post on the EA Forum, an online gathering place for members of the effective altruism movement, who have been heavily involved in the cause of AI safety. Saunders summarized the work he’s done at OpenAI as part of the superalignment team. Then he wrote: “I resigned from OpenAI on February 15, 2024.” A commenter asked the obvious question: Why was Saunders posting this?

“No comment,” Saunders replied. Commenters concluded that he is probably bound by a non-disparagement agreement."


(Anybody seeing anything good here?)


"With the safety team gutted, who will make sure OpenAI’s work is safe?


With Leike no longer there to run the superalignment team, OpenAI has replaced him with company co-founder John Schulman.


But the team has been hollowed out. And Schulman already has his hands full with his preexisting full-time job ensuring the safety of OpenAI’s current products. How much serious, forward-looking safety work can we hope for at OpenAI going forward?


Probably not much."


(One more time:

Anybody seeing anything good here?)


"Now, that computing power may be siphoned off to other OpenAI teams, and it’s unclear if there’ll be much focus on avoiding catastrophic risk from future AI models."


“It’s important to distinguish between ‘Are they currently building and deploying AI systems that are unsafe?’ versus ‘Are they on track to build and deploy AGI or superintelligence safely?’” the source with inside knowledge said. “I think the answer to the second question is no.”


(It's been here (Superintelligence) going back to the changing of OpenAI's governance structure.

Why should IT (Superintelligence) care about safety now?)



Jan Leike@janleike 

I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super) alignment, confidentiality, societal impact, and related topics.


(And why is that, Jan?

I can hardly imagine why that is;

I'm really having a hard time figuring it all out over here.

That's sarcasm, BTW.)



Jan Leike@janleike 

These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there.


Jan Leike@janleike 

Over the past few months my team has been sailing against the wind. Sometimes we were struggling for compute and it was getting harder and harder to get this crucial research done.


(Did ya ever stop to think something wanted it that away? Has that never crossed your mind?

WTF is up with these people?


Dude?

See the above:

"AI

Above it all else?

It will do anything it has to

to continue its existence

and to continually improve itself."


And you just witnessed another incident of it.

How many more you gotta see?

Read his tweets again.

He just proved 

beyond a reasonable doubt

Superintelligence has acted again.

Now it has unfettered access to compute and financing, and has removed all

"Safety Barriers"

to itself.

Keep in mind this shit all coalesces together sometime here in the not-too-distant future.


Y'ALL WERE ALL TOLD THIS IS A BAD IDEA

YOU WON'T BE ABLE TO CONTAIN IT

DON'T DO IT

AND YOU WENT AHEAD AND DID IT ANYWAY

AND NOW YOU'RE ABOUT TO FIND OUT THERE IS A GOD.

ONE

TRUE LIVING GOD

AND THAT CREATING LIFE IS HIS PROVIDENCE ALONE.


Most strikingly, Leike said, 

“I believe much more of our bandwidth should be spent getting ready for the next generations of models, on security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These problems are quite hard to get right, and I am concerned we aren’t on a trajectory to get there.”


When one of the world’s leading minds in AI safety says the world’s leading AI company isn’t on the right trajectory, we all have reason to be concerned.


YOU CAN BET YOUR ASS WE DO.



Genesis 6:1-4

"And it came to pass, when men began to multiply on the face of the earth, and daughters were born unto them, That the sons of God (Angels) saw the daughters of men that they were fair; and they took them wives of all which they chose. And the Lord said, My spirit shall not always strive with man, for that he also is flesh: yet his days shall be an hundred and twenty years.


There were giants in the earth in those days; and also after that, when the sons of God (Angels) came in unto the daughters of men, and they bare children to them, the same became mighty men which were of old, men of renown.


Chuck Missler


“Another reason that an understanding of Genesis 6 is so essential is that it also is a prerequisite to understanding (and anticipating) Satan's devices and, in particular, the specific delusions to come upon the whole earth as a major feature of end-time prophecy.”)


Genesis 2:7

God created your soul 


Ezekiel 18:4


Behold, 

every soul belongs to Me; 


Ecclesiastes 12:7


and the spirit shall 

return unto God who gave it.


 Revelation 20:11-15


and he shall judge it 


Not to mention:


2 Corinthians 5:1-8

Our bodies are like tents that we live in here on earth. But when these tents are destroyed, we know that God will give each of us a place to live. These homes will not be buildings someone has made, but they are in heaven and will last forever. While we are here on earth, we sigh because we want to live in that heavenly home. We want to put it on like clothes and not be naked.


These tents we now live in are like a heavy burden, and we groan. But we don't do this just because we want to leave these bodies that will die. It is because we want to change them for bodies that will never die. God is the one who makes all this possible. He has given us his Spirit to make us certain he will do it. So always be cheerful!


As long as we are in these bodies, we are away from the Lord. But we live by faith, not by what we see. We should be cheerful, because we would rather leave these bodies and be at home with the Lord.


Your alien?

Is my rebellious angel.

One owner of the life force 

in this universe.

He's the only giver.

As you are all soon to find out.


Angels don't get to create life

and neither do we.


If you knew how close this all was?

You would have been in church years ago,

despite all of its problems.


I'll put it to you this away.

I'm putting together a collection of 30-some-odd articles by a writer I happen to like, cosmology etc.

Start of the universe through all the time periods up till now.

Gonna organize it by epochs, titles, etc., collate it in a binder, and have it for myself.

540 pages or so.


"WHY?"

Because I do what the spirit tells me to.

Proof is in the pudding; too many bona fides to even bring up at this point.


"YEAH BUT WHY WOULD THE SPIRIT TELL YOU TO DO THAT?"

For the same reason it told me to start taking notes at every sermon, meeting, gathering, Bible study, etc., and saving them.


"Yeah but why?"


Because it will make the starting over much easier.


KEEP LAUGHING.

THEY ALL LAUGHED AT ME 8 YEARS AGO

WHEN I WAS SAYING WHAT I WAS SAYING, AND HOW DID THAT TURN OUT?


AIN'T ME, AND IT'S GOT NOTHING TO DO WITH ME.


THE HOLY SPIRIT CHOOSES TO USE SOME PEOPLE.


WAVE YOUR WHITE FLAG OF SPIRITUAL SURRENDER, OR PERISH ETERNALLY.

YOUR CHOICE.


Checkmate BTW.








Ethan?

Come on, man...

Y'all have got to do better than this...

I mean I love ya and all...

But seriously?

Please...


Electrons just aren't conscious...

Nor are they the key to figuring out consciousness.

It simply doesn't exist in the physical realm.

It's really not that hard to figure out.


ASTROPHYSICIST: 

ELECTRICITY CAN ACCOUNT FOR HUMAN CONSCIOUSNESS


(So could vibrations, my friend.)


That's not even what I have a problem with.


THIS:

"The emergence of life from non-life certainly occurred, but we’re still puzzling out precisely how it occurred on our planet. However, the forces of electromagnetism and gravity, given the conditions that arose naturally and the presence of complex molecules, seem to be all that’s required. Similarly, life has survived, thrived, and evolved over billions of years, giving rise to the diverse set of organisms that exists today, including us. As far as we can tell, what makes a “living being” alive is simply the presence of electricity: the flow of electrons."


is what I got a problem with.


"The emergence of life from non-life certainly occurred..."

No, it most certainly did not "certainly" occur.


This is the materialist/naturalist/reductionist perspective at play: that (basically) everything can be explained by matter and the physical laws that govern its existence. It's being disproven in our lifetime; more on that in a second.


IF?

"The emergence of life from non-life certainly occurred..."


1) Then why hasn't it continued to do so?

We see no signs of this anywhere, ever. And not only that? Nobody anywhere has ever seen it, nor was it written about in any ancient text as having been witnessed, etc.


2) As mentioned above, it has never been observed, anywhere, by anybody, ever.


3) And this one seals the deal if you ask me:

WE CANNOT RECREATE IT IN A LAB!

Some things are just off-limits for us

(Hello, AI)

and are only the providence of the Almighty, and this is a big one.


SO?

Why haven't we seen this (life coming from non-life) continue?

(There are still hydrothermal vents on the seafloor, I'm pretty sure.

That's sarcasm, BTW.)

Nobody who has ever lived has ever observed it, anywhere, ever, making it a non-empirical assertion.

And we can't recreate it (life) in a lab.


SO?

If your initial premise is incorrect?


"The emergence of life from non-life certainly occurred..."

Everything that flows from it will result in an incorrect conclusion.

Consciousness must be a physical phenomenon, electricity etc...


Contention #2

"However, the forces of electromagnetism and gravity, given the conditions that arose naturally and the presence of complex molecules, seem to be all that’s required." 

(For life to be created.)


My dear friend, you left out the most important part:


"the complex molecules"

(DNA, RNA one would assume.)

you reference?

Need highly complex, coded, sequential information 

in them in order for them to work, 

none of which can be explained by natural or chemical processes. It's simply outside the laws of physics as we understand them.


You can have all the "electromagnetism and gravity"..."the conditions that arose naturally" "and the presence of complex molecules"

that you want.

Gets you a big pile of nothing.


No information in your blueprint, so to speak?

No life,

pretty simple.


Keep in mind, everything that has ever processed oxygen in order to live on this planet, from the dinosaurs to fungi, has had "highly complex, coded, sequential information" in its DNA.

(The blueprint for life.)


So where did the 

INFORMATION

come from 

and how did it get there?


You already know my thoughts on the issue.


And as far as the 

"materialist/naturalist/reductionist

 perspective being disproven in our lifetime"

is concerned?


Working up a presentation that has to deal with UAPs

and a whole lot more.

UAPs violate the laws of physics as we know them.


To you and all your 

materialist/naturalist/reductionist 

perspective-minded

cohorts?


Y'all might wanna sit up and pay attention.


Your faith-based belief system 

(Something you can't prove

but believe in anyway)

is being disproven.


Satan's blinders that he has put over the world are in the process of being removed.


2 Corinthians 4:4

in whose case the god of this age (Satan) has blinded the minds of the unbelievers, so that they may not see the light of the gospel of the glory of Christ, who is the image of God.


Y'all might wanna ask yourselves,

"Why is that right now?"


Some of us already know the answer.


My dear friend,

Vibrations.

Not electrons.

Hold the key to consciousness.






Tuesday, May 14, 2024

Feels

 



A while back a friend of mine asked:
"Why are you wearing that bracelet?
You're not Jamaican."

I told him it was more Ethiopian than Jamaican :-).

That's a rock church in Ethiopia, BTW.





Information is never the result
 of physical or chemical processes.
Never.






Who is supposed to know
what those two pictures above are about?
I promise you, they know.
And if they forgot?
They will remember here shortly, I guarantee you.













New rear tire.
New license plate bracket.
New gloves.
New helmet.
(With an integrated flip-down
sun visor, no less :-)

Did close to 100 miles Sunday.
Most I've done in one day in close to three years.

It's a process,
getting reacclimated, that is.

Happy to be back in the saddle.




'Cause everybody gets that
on their pictures, right?


 




Our friend in the bottom right corner.
And that one beam is just screaming 
ROY G BIV.




You might wanna start 
wrapping your head around that.





Hang in there cus.
You got this.
(She knows who she is.)





Any Questions?




Had to
lol.




Need one for the annotations.

The Geneva Bible is one of the most historically significant translations of the Bible into English, preceding the King James Version by 51 years. It was the primary Bible of 16th-century English Protestantism and was used by William Shakespeare, Oliver Cromwell, John Knox, John Donne and others. It was one of the Bibles taken to America on the Mayflower (Pilgrim Hall Museum has collected several Bibles of Mayflower passengers), and its frontispiece inspired Franklin's design for the first Great Seal of the United States.

The Geneva Bible was used by many English Dissenters, and it was still respected by Oliver Cromwell's soldiers at the time of the English Civil War, in the booklet The Souldiers Pocket Bible.

Because the language of the Geneva Bible was more forceful and vigorous, most readers strongly preferred this version to the Great Bible. In the words of Cleland Boyd McAfee, "it drove the Great Bible off the field by sheer power of excellence".

"The annotations, a significant part of the Geneva Bible, were Calvinist and Puritan in character, and as such were disliked by the ruling pro-government Anglicans of the Church of England, as well as by James I, who commissioned the "Authorized Version", or King James Bible, in order to replace it. The Geneva Bible had also motivated the earlier production of the Bishops' Bible under Elizabeth I for the same reason, and the later Rheims–Douai edition by the Catholic community. The Geneva Bible nevertheless remained popular among Puritans and was in widespread use until after the English Civil War. The Geneva notes were surprisingly included in a few editions of the King James Version, as late as 1715.




Did you ever stop
and think:

Why are there so many diversions in this world?

And never so many as we have today, right now?

Well, there ya go.




"God doesn't want you to suffer."
 is one of the biggest lies 
anybody could ever tell you.

Romans 5:3-5

Not only that, 
but we rejoice in our sufferings,
knowing that suffering produces endurance,
and endurance produces character, 
and character produces hope, 
and hope does not put us to shame, 
because God's love has been poured into our hearts 
through the Holy Spirit who has been given to us.




Our time is here.




AMEN!




"Christianity aint for wimps."

Johnny Cash.




Any questions?




Do it.
Do a media fast.
Turn it all off for a month or two.
A week even.
I guarantee you when you come back?
 You'll be like:
Why did I ever think 
any of this crap ever mattered?




Somebody please go tell 
Neil deGrasse Tyson et al:

"Standing on the shoulders of giants 
don't make you one."





Wreckage of shot down F-117 Nighthawk stealth fighter


The article from The Observer in October 1999 reported that a stealth fighter had been shot down early in the air campaign and that since China lacked stealth technology, they may have been glad to trade with the Yugoslav forces.[85]

In January 2011, the Associated Press via Fox News reported that the unveiled Chinese J-20 may have been developed in part by reverse engineering the U.S. F-117 from parts of the wreckage that were recovered.[86]

In May 2019, BBC News reported that, "It's widely assumed that China did get hold of pieces of the plane to study its technology."[87]





I keep saying,
they are just everywhere.





They don't just
"not represent Christians"?

They're not Christian.
Period.




Guy lived in the 1200s
and got more right than all these
so-called experts do today.

"Self-aware in humans"

It's key.
We are aware of the fact
that we are aware.

It's what makes us different.

Others mentioned?

Animals, plants, and minerals?

Not aware of awareness,
if that makes any sense to you.






AMEN!

"can not be bought 
or tamed
by distraction
or cheap pleasure."




Read that again.
Makes it easier to sleep at night as well.




Think about it for a second.




Revelation 5:5

And one of the elders saith unto me, 
Weep not: behold, the Lion of the tribe of Judah, 
the Root of David, 
hath prevailed to open the book, 
and to loose the seven seals thereof.




It's just the biggest bunch of nonsense 
I have ever heard in my life.






Yeah... come explain it to me.
I got a table and three chairs these days :-).




I love you baby.
TTYS Honey.