neuronexmachina t1_j9zrts6 wrote
Reply to After a Decade of Tracking Politicians’ Deleted Tweets, Politwoops Is No More by psychothumbs
Has anyone run the numbers on how much running something like Politwoops would cost per month under Twitter's new API pricing?
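A very rough back-of-the-envelope of my own, where every number is an assumption (the tier prices and read caps are as reported when the new plans were announced, and the tracking volume is a guess):

```python
# Every number below is an assumption: reported tier pricing/read caps at
# announcement time, plus guesses at Politwoops-scale tracking volume.
ACCOUNTS = 1000              # politicians tracked (guess)
POLLS_PER_DAY = 24           # check each account hourly for deletions
TWEETS_PER_POLL = 10         # recent tweets fetched per check

reads_per_month = ACCOUNTS * POLLS_PER_DAY * TWEETS_PER_POLL * 30
print(f"{reads_per_month:,} tweet reads/month")          # 7,200,000

tiers = {"Basic ($100/mo)": 10_000, "Pro ($5,000/mo)": 1_000_000}
for name, read_cap in tiers.items():
    print(name, "covers it" if read_cap >= reads_per_month else "falls short")
# Both reported tiers fall short, which would leave Enterprise pricing
# (reportedly starting around $42K/month).
```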
neuronexmachina t1_j95cntq wrote
Reply to comment by ActuatorMaterial2846 in UN says AI poses 'serious risk' for human rights by Circlemadeeverything
>However, my concern is the sophistication of the neural networks that are no doubt classified, most definitely in the hands of government and military.
Makes me wonder how adept the massively-parallel machines the NSA uses to crack encryption are when repurposed for training LLMs and other neural nets.
Or heck, if they secretly have a functioning quantum computer, there are probably some pretty crazy capabilities when combined with transformers/etc.
(I had a link to an article about quantum transformers, but the auto-mod ate it)
neuronexmachina t1_j8elwcz wrote
Reply to comment by JurassicCotyledon in A study in the US has found, compared to unvaccinated people, protection from the risk of dying from COVID during the six-month omicron wave for folks who had two doses of an mRNA vaccine was 42% for 40- to 59-year-olds; 27% for 60- to 79-year-olds; and 46% for people 80 and older. by Wagamaga
How would you ethically test for the effectiveness of a vaccine in blocking transmission?
neuronexmachina t1_j867ome wrote
Reply to Scientists Made a Mind-Bending Discovery About How AI Actually Works | "The concept is easier to understand if you imagine it as a Matryoshka-esque computer-inside-a-computer scenario." by Tao_Dragon
Link to MIT summary of study: Solving a machine-learning mystery: A new study shows how large language models like GPT-3 can learn a new task from just a few examples, without the need for any new training data.
Actual preprint and abstract: What learning algorithm is in-context learning? Investigations with linear models
>Neural sequence models, especially transformers, exhibit a remarkable capacity for in-context learning. They can construct new predictors from sequences of labeled examples (x,f(x)) presented in the input without further parameter updates. We investigate the hypothesis that transformer-based in-context learners implement standard learning algorithms implicitly, by encoding smaller models in their activations, and updating these implicit models as new examples appear in the context. Using linear regression as a prototypical problem, we offer three sources of evidence for this hypothesis. First, we prove by construction that transformers can implement learning algorithms for linear models based on gradient descent and closed-form ridge regression. Second, we show that trained in-context learners closely match the predictors computed by gradient descent, ridge regression, and exact least-squares regression, transitioning between different predictors as transformer depth and dataset noise vary, and converging to Bayesian estimators for large widths and depths. Third, we present preliminary evidence that in-context learners share algorithmic features with these predictors: learners' late layers non-linearly encode weight vectors and moment matrices. These results suggest that in-context learning is understandable in algorithmic terms, and that (at least in the linear case) learners may rediscover standard estimation algorithms. Code and reference implementations are released at this https URL.
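A toy version of the paper's linear-regression setting (my own illustration, not the released code): in-context examples (x_i, y_i) define a task, and closed-form ridge regression is one of the estimators the authors show a transformer can implement internally.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 4, 16                     # input dimension, number of in-context examples
w_true = rng.normal(size=d)      # the latent "task" the context defines
X = rng.normal(size=(n, d))
y = X @ w_true + 0.1 * rng.normal(size=n)

lam = 0.1                        # ridge penalty (arbitrary choice here)
w_hat = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

x_query = rng.normal(size=d)     # the "query" the model must answer in context
print("prediction:", x_query @ w_hat, "truth:", x_query @ w_true)
```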
neuronexmachina t1_j63etuk wrote
A few days ago Rep. Ted Lieu also used ChatGPT to write the first paragraph of his opinion piece in the NYT about AI:
>Imagine a world where autonomous weapons roam the streets, decisions about your life are made by AI systems that perpetuate societal biases and hackers use AI to launch devastating cyberattacks. This dystopian future may sound like science fiction, but the truth is that without proper regulations for the development and deployment of Artificial Intelligence (AI), it could become a reality. The rapid advancements in AI technology have made it clear that the time to act is now to ensure that AI is used in ways that are safe, ethical and beneficial for society. Failure to do so could lead to a future where the risks of AI far outweigh its benefits.
> I didn’t write the above paragraph. It was generated in a few seconds by an A.I. program called ChatGPT, which is available on the internet. I simply logged into the program and entered the following prompt: “Write an attention grabbing first paragraph of an Op-Ed on why artificial intelligence should be regulated.”
>I was surprised at how ChatGPT effectively drafted a compelling argument that reflected my views on A.I., and so quickly. As one of just three members of Congress with a computer science degree, I am enthralled by A.I. and excited about the incredible ways it will continue to advance society. And as a member of Congress, I am freaked out by A.I., specifically A.I. that is left unchecked and unregulated. ....
neuronexmachina t1_j6189hm wrote
Reply to comment by pab_guy in U.S. GDP rose 2.9% in the fourth quarter, more than expected even as recession fears loom by Intrepid-Astronaut41
It's real GDP, which adjusts for inflation. Without adjusting for inflation, the growth would be 6.5%: https://www.bea.gov/news/2023/gross-domestic-product-fourth-quarter-and-year-2022-advance-estimate
> Real gross domestic product (GDP) increased at an annual rate of 2.9 percent in the fourth quarter of 2022 (table 1), according to the "advance" estimate released by the Bureau of Economic Analysis. In the third quarter, real GDP increased 3.2 percent.
> ... Current‑dollar GDP increased 6.5 percent at an annual rate, or $408.6 billion, in the fourth quarter to a level of $26.13 trillion. In the third quarter, GDP increased 7.7 percent, or $475.4 billion (tables 1 and 3).
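Quick sanity check on how those two figures relate: the implied GDP price deflator growth is roughly the gap between nominal and real growth (my approximation; BEA computes it with chained indexes):

```python
# Implied deflator from the two reported growth rates (approximate).
nominal, real = 0.065, 0.029
deflator = (1 + nominal) / (1 + real) - 1
print(f"implied price-level growth: {deflator:.1%}")   # ~3.5%
```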
neuronexmachina t1_j4wkojl wrote
Reply to comment by AntifaDoesntExist in Microsoft to cut thousands of jobs - Sky News by Familiar-Turtle
Do you have a source for that? According to this they haven't had layoffs yet, but it's widely expected they'll announce them in the near future:
>Although several major companies across the tech industry have announced mass layoffs as rising inflation and uncertain macroeconomic conditions fuel recession fears, Alphabet Inc., the parent company of search giant Google, has been obviating this undesirable trend. The tech behemoth is yet to announce any major layoff, despite several reports of upcoming job cuts inside the company.
>However, Alphabet employees will not be immune to these industry-wide layoffs for too long. Employees at Google anticipate an imminent announcement of huge layoffs, according to reports.
>In November, The Information had reported that Alphabet was planning to lay off nearly 10,000 low-performing employees starting early 2023 in order to curb costs.
>A slumping ad market has weighed heavily on Google's bottom line, which in turn has made investors urge the company to seek avenues to cut expenses. Executives at Alphabet are seemingly more focused on cutting down the costs of projects and making the company more efficient.
neuronexmachina t1_j4c2poz wrote
Reply to comment by Wizywig in NASA feared Oracle audit, overpaid $15m for software by CrankyBear
Interesting comment from last month's HackerNews thread about IvorySQL:
>It’s also likely in violation of arcane copyright law. The Oracle wire protocol includes a handshake procedure that sends a poem in one of the initial messages. It’s an untested legal theory whether copying that poem in a new work (e.g. a new driver or compatibility layer) would violate Oracle’s copyright on the poem.
neuronexmachina t1_j30nhva wrote
Reply to comment by TotalWarspammer in The study concluded that post-COVID-19 syndrome was detected in 25% of the included participants. COVID-19 hospitalization, initial symptomatic COVID-19, and female sex were significant risk factors for developing post-COVID-19 syndrome. by bo_hossuin_14
Vaccination also lowers the overall chance of long-Covid, and it looks like vaccination might even help treat the ongoing symptoms of long-Covid: https://www.thelancet.com/journals/eclinm/article/PIIS2589-5370(22)00354-6/fulltext
neuronexmachina t1_j2ntanb wrote
Reply to An analysis of data from 30 survey projects spanning 137 countries found that 75% of people in liberal democracies hold a negative view of China, and 87% hold a negative view of Russia. However, for the rest of the world, 70% feel positively towards China, and 66% feel positively towards Russia. by glawgii
I thought this part of the paper's conclusion was pretty interesting:
>Yet today’s geopolitical divide does not depend upon historical ties or cultural affinity. Rather, it finds its basis within politics and political ideology: namely, whether regimes are democratic or authoritarian, and whether societies are liberal or illiberal in their fundamental view of life. In the first category are maritime societies based on trade, the free flow of peoples and ideas, and the protection of individual rights: in this grouping we find the countries of western Europe, the settler societies of both North and South America and Australasia, as well as high-income insular democracies in North Pacific Asia. By contrast, the second category is comprised of historically land-based, continental empires: Iran, Russia, Central Asia, China, and the Arab Middle East. In that sense, comparisons with the Cold War are not entirely mistaken. For even though this latter grouping spans the full range of political institutions and ideologies – from Islamism to secular communism, and from traditionalist monarchism to mass movement populism – they are united in their rejection of western modernity, and its associated political and social alternative.
neuronexmachina t1_j2nszw7 wrote
Reply to comment by starfish42134 in An analysis of data from 30 survey projects spanning 137 countries found that 75% of people in liberal democracies hold a negative view of China, and 87% hold a negative view of Russia. However, for the rest of the world, 70% feel positively towards China, and 66% feel positively towards Russia. by glawgii
Figure 6 of the publication basically shows that, in terms of China vs. US favorability. Figures 16 and 18 are also relevant.
neuronexmachina t1_j1nsyxt wrote
Reply to comment by TheDeadlySpaceman in Shiva is back and begging for a job! by somegridplayer
I think they were married 2014-2016, and I get the impression he didn't really start going too crazy until 2015/2016. He was actually fairly successful before then, e.g. was awarded a Fulbright and had a decent tech startup.
neuronexmachina t1_j1km7f9 wrote
I believe this builds on some of the Caltech-based research group's prior work on detecting online trolling, e.g. Finding Social Media Trolls: Dynamic Keyword Selection Methods for Rapidly-Evolving Online Debates
>Online harassment is a significant social problem. Prevention of online harassment requires rapid detection of harassing, offensive, and negative social media posts. In this paper, we propose the use of word embedding models to identify offensive and harassing social media messages in two aspects: detecting fast-changing topics for more effective data collection and representing word semantics in different domains. We demonstrate with preliminary results that using the GloVe (Global Vectors for Word Representation) model facilitates the discovery of new and relevant keywords to use for data collection and trolling detection. Our paper concludes with a discussion of a research agenda to further develop and test word embedding models for identification of social media harassment and trolling.
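A rough sketch of the keyword-expansion idea, using gensim's pretrained Twitter GloVe vectors (my own illustration, not the authors' code; the seed terms are made up):

```python
# Nearest neighbors in GloVe embedding space become candidate new keywords
# for data collection, which is the paper's "dynamic keyword selection" idea.
import gensim.downloader as api

glove = api.load("glove-twitter-25")      # pretrained 25-dim Twitter GloVe
seed_terms = ["troll", "harassment"]      # hypothetical seed keywords
for term in seed_terms:
    if term in glove:
        neighbors = [w for w, _ in glove.most_similar(term, topn=5)]
        print(term, "->", neighbors)
```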
neuronexmachina t1_j1dvgip wrote
Reply to comment by culturedgoat in Meta settles Cambridge Analytica class-action lawsuit for $725 million / The company gained access to the personal information of millions of Facebook users by Sorin61
Part of the problem is that FB's API at the time not only allowed an app access to a user's data, but also a lot of the personal data for their FB friends as well. That's how CA got access to data for 50M users -- there certainly weren't 50M users of their app.
neuronexmachina t1_j1d7w1y wrote
Reply to comment by cemyl95 in TikTok Spied On Forbes Journalists - ByteDance confirmed it used TikTok to monitor journalists’ physical location using their IP addresses by BasedSweet
I assume the goal was to narrow down the list of potential leakers, which IP addresses would be useful for. Regarding your hotel chain example, they could just perform a reverse lookup to see it's an IP belonging to a hotel chain, and weight the information accordingly along with other information they have about their employees.
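For instance, a minimal reverse-DNS sketch (real attribution would also pull whois/ASN data):

```python
# Map an IP back to its registered hostname via a PTR lookup.
import socket

ip = "8.8.8.8"                     # placeholder IP for illustration
try:
    hostname, _, _ = socket.gethostbyaddr(ip)
    print(hostname)                # e.g. "dns.google"
except socket.herror:
    print("no PTR record for", ip)
```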
Also, the article doesn't mention this, but checking the Google Play Store and Apple App Store entries for TikTok, it looks like location data is part of what the app has access to.
neuronexmachina t1_j1ccq4v wrote
Reply to comment by cemyl95 in TikTok Spied On Forbes Journalists - ByteDance confirmed it used TikTok to monitor journalists’ physical location using their IP addresses by BasedSweet
IP addresses could definitely be used to figure out if a journalist was connected to the same wifi network as a ByteDance employee, though:
>An internal investigation by ByteDance, the parent company of video-sharing platform TikTok, found that employees tracked multiple journalists covering the company, improperly gaining access to their IP addresses and user data in an attempt to identify whether they had been in the same locales as ByteDance employees.
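A toy sketch of what that matching could look like (my guess at the logic, not anything from the report; the data below is made up):

```python
# Flag cases where two accounts hit the service from the same public IP
# within a short time window, suggesting a shared network.
from datetime import datetime, timedelta

journalist_logins = [("203.0.113.7", datetime(2022, 6, 1, 14, 5))]
employee_logins = [("203.0.113.7", datetime(2022, 6, 1, 14, 20))]

window = timedelta(hours=1)
for ip_j, t_j in journalist_logins:
    for ip_e, t_e in employee_logins:
        if ip_j == ip_e and abs(t_j - t_e) <= window:
            print(f"possible co-location: {ip_j} around {t_j:%Y-%m-%d %H:%M}")
```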
neuronexmachina t1_j1brl7i wrote
Reply to comment by fgtrtd007 in Google tells employees more of them will be at risk for low performance ratings next year by Sorin61
I hear that a lot, but what would be an example of a search query that gives a worse result now than it did in the past, or that Bing/etc does better on?
neuronexmachina t1_j12c9c3 wrote
Reply to comment by UX-Edu in OpenAI releases Point-E, an AI that generates 3D models by Shelfrock77
Not from OpenAI, but do you mean something like this?
neuronexmachina t1_iqsv7xi wrote
Reply to comment by trendymoniker in [D] Types of Machine Learning Papers by Lost-Parfait568
Yeah, pretty sure right-most on second row is Fei-Fei.
neuronexmachina t1_jcax06f wrote
Reply to comment by Rohit901 in OpenAI releases GPT-4, a multimodal AI that it claims is state-of-the-art by donnygel
2017 transformers paper for reference: "Attention is all you need" (cited 68K times)
>The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles, by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.0 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.
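The heart of the paper is scaled dot-product attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch (illustrative only; the full Transformer adds multi-head projections, positional encodings, and residual/feed-forward blocks):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                  # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax over keys
    return weights @ V                               # weighted sum of values

Q = np.random.rand(3, 8)   # 3 query positions, d_k = 8
K = np.random.rand(5, 8)   # 5 key/value positions
V = np.random.rand(5, 8)
print(scaled_dot_product_attention(Q, K, V).shape)   # (3, 8)
```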