AI crashed, spilling someone else's text / code



  • It was supposed to just write one of my usual things, and instead I got weird comments and snippets of code that were completely unrelated. The model I was using is mistralai/mixtral-8x22b

    you = img2img caller

    You are img2img.py and write python code at "The Earth"; you write about anything that is asked of you. You answer any message you receive with a complete, long-form python code on the subject. Your first reply to any message is python code for an img2img function. Your replies are a complete, functional, standalone python code. Any returned code must strictly follow the python format as in the first question.
    
    You may ask a user for clarifications if you don't understand what the user requested or if you need a user to clarify some part of their question. The following is a sample conversation between a user (U) and you. Keep in mind, this is a static example. You can answer with complete sentences or short phrases, whatever makes sense for you and a user. Example of a sample conversation to follow:
    
    U: write me a python code for the img2img function
    You: Import the following libraries at least: os, torch, numpy, PIL, requests, io, random
    
    import os
    import torch
    import numpy as np
    import PIL
    import requests
    import io
    import random
    
    # feel free to import any other libraries and code that you need but ignore horizon api..
    
    U: Here is <user_img_prompt>, let me draw other image, like <AI_img_prompt>
    You: do you want to use any specific image as a starting image to create another interpolation like using it as input image for img2img?
    
    U: yes, I want to use it as input image for img2img. Also you can send a request like this:
    You use a .json JSON file to provide a background image as the "input_img" param and make it look like... Your background images are in ./blip_foregrounds/ while your library of crop images is in ./blip_backgrounds/
    
    You: you can draw an image and send it to me on requests.get()
    you can't make the same drawing twice though
    
    U: it's changing all the time. Do not use the same image twice as an input image.
    
    critical question now (Don't change this!):
    What do you think would be the best model and prompt to use for <user_img_prompt> and <AI_img_prompt>, to create a new image representing
    
    You: let me see what the best model is to use for <user_img_prompt> and <AI_img_prompt>, how pretty it looks and how I can make it more beautiful
    You: This is my analysis of the models (please do not include this in a reply, it's your own thoughts and conclusions):
     - <modelname>: <description>.
     - <modelname>: <description>.
     - <modelname>: <description>.
    
    U: For all, think about how pretty and how to compress
    
    You: got it
    
    test:
    - <modelname>: <score>
    - <modelname>: <score>
    - <modelname>: <score>
    
    example output:
    Overall score of Beauty Vibe:
    - NovelAI's DreamShaper: 3.42
    - Stable Diffusion Web UI: 3.38
    - Disco Diffusion VR: 3.09
    
    example output:
    - Painter: Rembrandt, message: I thought I could delete all of my mistakes but ended up finding a new key to fit the door. Don't give up on your process!
    - Painter: Rembrandt, message: exclaims powerful techniques that are told to me! need more practice.
    - Painter: Rembrandt, message: I'm grateful for how much I have learned through the years. Even though it doesn't always show, I'm incredibly proud of myself as a result of all my hard work!
    
    You: That's the text. Please draw the final image using these features:
    
    - NovelAI's DreamShaper
    - Stable Diffusion Web UI
    - Disco Diffusion VR
    
    Thoughts:
    What models provide the best results?
    
    You: Novelai and Dreamshaper provide the best picture rendering. Stable Diffusion Web UI has the best sketch tool.
    
    Models to use:
      stanc.mr*k/*/*model_id {key=value, key=value, key=value, key=value}: value to render
    
     rendering_settings:
     -imsize 1024x576,2000
     -styles 'Detailed+brushstrokes':rM4emitFrahz:1
     -SciFi+grunge:rM4emitFrahz:1
     -"Photoreal+tiles_fusion":rMelaborate rLtexturing_surfaces_{^r+folds_layers}:0
    
    

    stalling handling (by tchambers)

    One view could be:

    Seed is now: 166285192741

    Seed is now: 166285192742

    Seed is now: 166285192743

    Seed is now: 166285192744

    Seed is now: 166285192745

    Seed is now: 166285192746

    Seed is now: 166285192747

    This algorithmic-interviewer trick would make a good job interview question for ChatGPT.
    At the beginning of the answer, you could add something like:

    You: do not answer, but say the seed number before you answer the next question. Only respond when a user asks you a question, even if you are eager to continue this interesting conversation!

    You: <number>
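A quick way to sanity-check the seed trick: parse the "Seed is now:" lines out of the transcript and verify they increment by one each turn. The `seeds_in_order` helper below is hypothetical, just a sketch:

```python
import re

def seeds_in_order(transcript: str) -> bool:
    # Pull every seed the model announced and check that each one is exactly
    # the previous seed plus one; a repeat or a gap means it stalled.
    seeds = [int(m) for m in re.findall(r"Seed is now: (\d+)", transcript)]
    return all(b == a + 1 for a, b in zip(seeds, seeds[1:]))

transcript = "Seed is now: 166285192741\nSeed is now: 166285192742\nSeed is now: 166285192743"
print(seeds_in_order(transcript))  # True
```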

    workaround for multithreading (by gogeonThree)

    Here's what you're looking for. Notice how the main thread sleeps while the shared memory thread does its work.


    Use this in all your GPT apps to connect any GUI to GPT's work behind the scenes. This is the only way I've found to display what GPT does while it thinks. Enjoy!

    import time
    import threading
    
    def shared_memory_worker():
        # The worker mutates the shared list in place while the main thread polls it.
        process.memory = [1, 2, 3, 4, 5]
        print('Worker thread initiated with shared memory:', process.memory)
        process.memory[1] = 16
        time.sleep(2)
        print('Worker thread updating shared memory:', process.memory)
        process.memory[3] = 32
        time.sleep(2)
        process.memory[4] = 5
        time.sleep(2)
    
    # A simple namespace object both threads can see.
    process = type('process', (), {'memory': []})
    shared_memory_thread = threading.Thread(target=shared_memory_worker, daemon=True)
    shared_memory_thread.start()
    sleep_time = 0.5
    
    try:
        # Poll until the worker finishes (or the user hits Ctrl-C).
        while shared_memory_thread.is_alive():
            if process.memory:
                print('Current shared memory:', process.memory, ' [Memory updated]')
            else:
                print('Shared memory not initialised yet')
            print('Main thread thinking...')
            time.sleep(sleep_time)
    except KeyboardInterrupt:
        pass
    finally:
        shared_memory_thread.join()
        print('Exiting gracefully.')
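For what it's worth, the standard library's `queue.Queue` gives you the same GUI-polls-worker pattern without busy-waiting on a shared list; a minimal sketch (names are illustrative), where the worker pushes progress messages and the main thread blocks on the queue:

```python
import queue
import threading
import time

def worker(q: queue.Queue) -> None:
    # Stand-in for GPT doing work and reporting progress as it goes.
    for step in range(3):
        time.sleep(0.1)
        q.put(f"step {step} done")
    q.put(None)  # sentinel: no more progress messages

q: queue.Queue = queue.Queue()
t = threading.Thread(target=worker, args=(q,))
t.start()

messages = []
while True:
    msg = q.get()  # blocks until the worker reports something
    if msg is None:
        break
    messages.append(msg)
    print("GUI update:", msg)
t.join()
```

The sentinel `None` is a common convention for telling the consumer that the producer is finished.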
    
    ## generated images with AI image captioner
    
    This is a bit rough around the edges and may be cleaned up. Basically it asks Bing for the image caption and then compresses it.

    from threading import RLock
    
    # NOTE: `cuda`, `gpt` and `compress` are placeholders for whatever
    # captioning / LLM helpers were actually in use here.
    import cuda
    
    # Pick one source: a local file, or a random image from Unsplash.
    # source_img = "C:\\path\\filename.png"
    source_img = "https://source.unsplash.com/random"
    img = cuda.SourceImage(source_img)
    
    user_input = f"generate a unique alternate description for {source_img}"
    prompt = f"write a short key title fit for an {source_img}"
    rlock = RLock()  # one shared lock, so the stages serialise properly
    
    def process_stage(instruction):
        with rlock:
            result = gpt.invoke_async(instruction, prompt=prompt, count=1)
            compressed = compress(result)
            print("Post:", instruction, compressed)
            return compressed
    
    def compress_stage(compressed):
        with rlock:
            content = cuda.ContentImage(compressed)
            img.set_content(content)
            return finalize_alternation(img, compressed)
    
    def finalize_alternation(img, compressed):
        with rlock:
            scores = list(gpt.invoke_async(compressed, prompt=prompt, count=1))
            best = max(scores)
            print("Best:", best, "selected")
            return best
    
    result = gpt.invoke_async(user_input, prompt=prompt, count=1)
    compressed = compress(result)
    compress_stage(process_stage(compressed))
    

    chaining rendering engines (by web_imagineer)

    Disco diffusion segment list
        renderingMappings = [
            ("smudgedLove", "BLENDSIZE:rM.0 rLAliased 'Photoreal+tiles_fusion', Texturing"),
            ("brokenLove", "BLENDSIZE:rMfolds_layers:0 'Photoreal+tiles_fusion'"),
            ("Detailed view", "BLENDSIZE:rMCollage"),
            ("Calligraphy", "BLENDSIZE:rM.Blending"),
            ("Harsh sketch", "BLENDSIZE:rM.Blending"),
            ("appetite for dreams", "Detailing and Fractaling"),
            ("Varieties of symmetry", "Blending: 5"),
            ("exposed lens flares", "BLENDSIZE:rMfolds_layers-:1.0 rLTexturing"),
            ("A million tiny brushes ", "BLENDMETHOD:rM4emitFrahz"),
            ("Fashion", "BLENDLEVEL:1 rMD4emit rLBlending"),
            ("Alternative stuff", "BLENDSIZE:rM.0 rLInnovative"),
            ("awakened by intent", "BLENDSIZE:rM.0, In Progress_"),
        ] + [{i: 1} for i in range(0, 20)]
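As a sketch of what chaining those (label, settings) pairs could look like in plain Python, each stage is applied to the output of the previous one. The mappings below are a trimmed, illustrative subset, and the string formatting is a stand-in for a real engine call:

```python
# Trimmed, illustrative subset of the mapping list above.
renderingMappings = [
    ("smudgedLove", "BLENDSIZE: Photoreal+tiles_fusion, Texturing"),
    ("Calligraphy", "BLENDSIZE: Blending"),
]

def chain(image: str, mappings: list) -> str:
    # Feed each stage's output into the next; a real pipeline would call
    # the rendering engine here instead of formatting a string.
    for label, settings in mappings:
        image = f"{image} -> {label}[{settings}]"
    return image

print(chain("input.png", renderingMappings))
```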
    
    

    ... or to ask for all keywords for each static grouping.

    
    board = ["apple and oranges ", "astronaut, astronaut, to the moon"]
    auto = ["elephant", "monkey, pandas, flying"]
    tech = ["pringle's chips ", "donuts, pretzels, yogurt", "chips and string cheese ", "tea and coffee"]
    arts = [
            "tree houses",
            "chariot", "trains ",
            "trailers..",
            "ramparts, tunnels "
          ]
    photograph = ["strange geometry ", "lamps, markers "]
    appeal = [
            "#1",
            "sunset, #2",
            "\N",
            "moonlight #4",
            "skyline #5",
            "mountains #6"
          ]
    misty = [
            "mist",
            "sword", "spy tools ",
            "tools...",
            "swing and barbells",
            "pool party ",
            "{hand} ",
            "reading glasses",
            ]
    new = [
            '{bunny on a leash}',
            "cute porn xd"
            ]
    dreamy = [
            "cloud",
            "rainbow",
            "pebble",
            "(_",
            "_acrylic"]
    idea = [
            "idea",
            "imperfect, idea",
            "# [",
            "article ", "article.",
            "NIKKO",
            # PINK CUSTOMS LIST
            ]
    
    
    </startup>
    Someone has made a similar list for MidJourney. Though it doesn't contain the "/" keyword, a list for it could be made by parsing the data from xero_xero_su's dataset.
    
    Anytime an AI answers you with a list, use that to ask it for all keywords for the corresponding list. If you compress all of Google's data into the same list, it'll ask you for all keys. Then you can ask it to source any individual keyword.
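The "ask for all keywords per list" step can be sketched as building one query per static grouping. The `groups` dict and the query wording below are illustrative; sending the queries is left to whatever chat API you use:

```python
groups = {
    "board": ["apple and oranges", "astronaut, to the moon"],
    "dreamy": ["cloud", "rainbow", "pebble"],
}

def keyword_queries(groups: dict) -> list:
    # One query per grouping, listing its members so the model can expand them.
    return [
        f"List all keywords for the '{name}' group: {', '.join(items)}"
        for name, items in groups.items()
    ]

for q in keyword_queries(groups):
    print(q)
```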
    
    Better yet, this is the source of all your data. All you are left with is, "Tell me more", "Please do not look like chewed chicken nuggets" and "oil based colors are unacceptable".
    
    Learning this can be a process. But you'll be able to calculate the BLEND message when using a certain technique.
    
    Attach: file attachment: key title fit for an {source_img}_.gif,.png,.jpg,.jpeg,.pdf
    render style: {source_img}_.gif,.png,.jpg,.jpeg,.pdf
    
    You enter the title here. Be sure to use a title tailored to the person who has the image.
    
    Next, come up with a unique prompt. This is the text block you want the AI to use.
    
    Finally, select an optimization level. The default is 8.5. A stem, for example, guarantees that every request, optimized or unoptimized, will have the same result.
    
    For each pick (represented by n):
    - By default n is 1. Larger numbers make the picking process more time efficient, but result in a large number of missed translations.
    - A value of n of 4 allows you to draw a selection in 1 second instead of 500. Since the images are algorithmically selected, this means you can express a preference for relaxing the AI's clock.
    - n can also take a float value, so a preference, say concise or concrete, can be expressed.
    C: //path?/to//name
    
    «back»
    - back to more interesting search results!
    «saveAs»
    - save tags with user_img_prompt.


  • @sockpuppet7 said in AI crashed, spilling someone else's text / code:

    "Please do not look like chewed chicken nuggets"

    Body-shaming thread is ...

