Black belt in Mikado, Photo model, for the photos where they put under ‘BEFORE’

  • 14 Posts
  • 10 Comments
Joined 4 years ago
Cake day: April 25th, 2021



  • 01101001 00100000 01110100 01111001 01110000 01100101 01100100 00100000 01110100 01101000 01101001 01110011 00100000 01100011 01101111 01101101 01101101 01100101 01101110 01110100 00100000 01110111 01101001 01110100 01101000 00100000 01110100 01101000 01100001 01110100 00100000 01101011 01100101 01111001 01100010 01101111 01100001 01110010 01100100 00100001

    01001110 01101111 00100000 01110000 01110010 01101111 01100010 01101100 01100101 01101101 00101100 00100000 01101001 01110100 00100111 01110011 00100000 01100110 01110101 01100011 01101011 01101001 01101110 01100111 00100000 01100101 01100001 01110011 01111001
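The exchange above is plain 8-bit ASCII written out in binary. As a minimal sketch using only the standard library, a one-line Python helper (the name `decode_binary` is my own) is enough to read it:

```python
# Decode space-separated 8-bit binary groups into ASCII text.
def decode_binary(bits: str) -> str:
    return "".join(chr(int(group, 2)) for group in bits.split())

print(decode_binary("01001110 01101111"))  # first word of the reply: "No"
```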



  • As I said, these things happen when a company uses AI mainly as a tool to harvest user data, treating the reliability of its LLM as an afterthought, which lets it collect data for its knowledge base more or less indiscriminately. This is why chatbots can generally be dismissed as a reliable source of information. Search assistants like Andi are different: they do not pull answers from their own knowledge base but retrieve them in real time from the web, so everything depends on whether they can judge the reliability of what they find, which Andi does by cross-checking several sources. That is why it scores the highest accuracy of all the major AIs, according to an independent benchmark.



  • The difference is simple: a chatbot takes its information from a knowledge base scraped from previous inputs. Much information is missing from that base, and in those cases a chatbot starts inventing answers from whatever it has, all the more so when it is built by big companies that use it mainly as a tool to harvest user data, with reliability only a secondary concern. AI can be useful professionally, in research, science, medicine, physics, etc., with specialized LLMs, but as a general-purpose chat for a normal user it is a scam. It is the wrong approach to AI for general use, as Google's AI has demonstrated.

    I use an AI as my main search engine (Andisearch) because it is built as a search assistant, not as a chatbot. Its base holds only enough information to “understand” your question and then look the concept up in reliable sources, in real time, on the web. Because of this its accuracy is far better than that of any chatbot from Google, M$, or others. It invents nothing: if it does not know the answer, it offers a normal web search instead. It is also one of the most private search engines: anonymous, no logs, no tracking, no cookies, a random proxy, and videos in the search results are sandboxed. It is not very well known, even though it was the first to use AI for search, long before the others, from a small startup with two devs. I have used it for almost two years, and so far I have found nothing better or more useful for daily use with AI. https://andisearch.com/
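The flow described above (understand the query, fetch live sources, fall back to a plain web search when there is not enough corroboration) can be sketched roughly as follows. This is a hypothetical outline, not Andi's actual implementation; `fetch_sources` and the `min_sources` threshold are stand-ins of my own:

```python
from typing import List, Optional

def fetch_sources(query: str) -> List[str]:
    # Placeholder: a real assistant would query the live web here.
    return []

def answer(query: str, min_sources: int = 2) -> Optional[str]:
    sources = fetch_sources(query)
    if len(sources) >= min_sources:
        # Cross-check several sources instead of inventing an answer.
        return f"Answer synthesized from {len(sources)} sources"
    # Not enough corroboration: fall back to a normal web search.
    return None  # caller shows ordinary search results instead

print(answer("example question"))  # → None (the placeholder fetch returns nothing)
```

The key design point the comment makes is the `None` branch: refusing to answer, rather than hallucinating one, when the sources are not there.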



  • I don’t see the sense in using a front end for a front end just to view a tweet, especially one whose privacy policy is quite questionable.

    …Here is an example of a request in the log file:

    ::ffff:127.0.0.1 - - [14/Feb/2022:00:02:58 +0000] "GET /jack HTTP/1.1" 302 60 "-" "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"

    Using twiiit.com adds a layer where one’s IP is visible: first to twiiit.com, then to whatever Nitter instance twiiit.com redirects the query to. Is this correct?
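The entry quoted above is in the usual combined access-log format (client IP, timestamp, request, status, size, referer, user agent). A short regex is enough to pull the fields apart; the pattern and field names here are my own sketch, not taken from the service's code:

```python
import re

# Parse an nginx/Apache "combined" log line (field names are illustrative).
LOG_RE = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d+) (?P<size>\d+) '
    r'"(?P<referer>[^"]*)" "(?P<agent>[^"]*)"'
)

line = ('::ffff:127.0.0.1 - - [14/Feb/2022:00:02:58 +0000] '
        '"GET /jack HTTP/1.1" 302 60 "-" '
        '"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 '
        '(KHTML, like Gecko) Chrome/77.0.3865.120 Safari/537.36"')

m = LOG_RE.match(line)
print(m.group("ip"), m.group("request"), m.group("status"))
# → ::ffff:127.0.0.1 GET /jack HTTP/1.1 302
```

Note that the `ip` field is exactly what the privacy question below is about: it is the client's address as seen by whichever server wrote the log.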

    Yes. When somebody uses the site, their IP is visible to the service. Once they are redirected to one of the Nitter instances, their IP is visible to that particular instance.

    You can use Tor or a VPS to obfuscate your IP address and user agent.

    That last part is something I can do anyway, without needing Nitter or Twiiit, on top of the tracker blocker and fingerprint randomization I already use. Nitter is sadly dead, and patching it with a front end doesn’t fix that in the long term. What’s next, another front end for Twiiit to view a tweet occasionally?