And if it's that, are you suggesting we not implement a technological efficiency tool in order to keep (now clearly redundant) jobs? That has never worked long-term in the history of mankind, AFAIK.
It's an ad post about Memgraph.
This is such overkill for that kind of data. Even if they do plan to "scale up significantly", I doubt they'll actually see any benefit from a graph DB.
To add more context, Memgraph Enterprise pricing is explained under https://memgraph.com/pricing: "Starting at $25,000 per year for 16 GB, Memgraph has an all-inclusive, simple pricing model that scales with your workload without restrictions. No charge for compute. No charge for replicas. No charge for algorithms. No Surprises.".
In addition, Memgraph Community is free (standard BSL license, which turns into Apache 2.0 four years after the release date, https://github.com/memgraph/memgraph/blob/master/licenses/BS...), and it has many features that are usually considered enterprise (user management, replication, no degradation in performance or scale, etc.).
Please elaborate on why the pricing seems expensive, or put it in perspective against your infra costs :pray:
To be fair, it is quite nice for the pricing to be transparent. And I think it's somewhat competitive with Stardog, for example, while the community version is less restricted than Ontotext's.
In the relational space, it took OSS options like Postgres many decades (and many somehow-paid-for person-years) to get to a place where enterprises seriously consider migrating off Oracle.
This is an absurd claim.
> Extracted Skills from Team Resumes
> Extracted Skills
> Subject Matter Experts Finder Question: designed to identify employees with expertise in specific domains or mission-critical capabilities.
I can't think of anything that screams "incompetent management" more than this. So, to find a subject matter expert, you're going to "extract skills" and "extract resumes" to answer abstract questions about your staff... without ever once just _talking_ to your staff?
What a cold and bizarre future these people think we want to live in.
Meanwhile, can we use technology to improve the level of connection I have and experience as an employee? Can you please stop asking LLMs to "extract" things about me into goofy automated pipelines? If you want skilled workers, then you need to demonstrate skilled management. This is all the exact opposite of that.
What's wrong with attempting to better understand a given organization using LLMs or any other tech? Ofc, great managers will still try to talk face to face as much as possible.
I highly doubt the difference between current staff management and adding this thin layer is anything like the difference between a bike and a rocket. It's more like saying "we get to the moon just fine, but if we strap this extra booster on, we'll get there 2% faster, with all kinds of additional risks to the payload!"
> What's wrong with attempting to better understand a given organization
You can alienate your employees and lose your skill base as a result. I'd like to be evaluated based on my work and dedication, not on what some LLM thinks it sees in my resume. I've worked for my current company for 17 years. My resume contains none of that work or any of the skills gained in that time.
I also like to take on new challenges and learn new skills. The LLM's "extractions" cannot see or account for this.
> Ofc, great managers will still try to talk face to face as much as possible.
That's not the problem being discussed here. The question is "can we use technology to make better organizational decisions, particularly when it comes to the efficient use of human resources?" If I have a bad boss, I'm going to quit, and you'll never even have this opportunity. If I have a good boss and you interfere with his decisions using LLM-driven logic, I'm going to quit, and you're never going to get the benefit of that labor anyway.
NASA apparently has ~18k employees, so it seems like it might be useful to be able to query "Who at NASA has X, Y, Z skills that can help us with this project?" Then you can speak to some of those people face-to-face. It won't be perfect, but it certainly sounds like a useful tool in principle.
> Ever wondered how NASA identifies its top experts, forms high-performing teams, and plans for the skills of tomorrow?
Here’s another resource on that: the book “How NASA Builds Teams: Mission Critical Soft Skills for Scientists, Engineers, and Project Teams” (https://appel.nasa.gov/2010/02/18/aa_2-7_f_nasa_teams-html/).
That is tiny even by historical standards. I was expecting there to be some type of technology here. Why is this interesting?
I have two major issues with virtually all graph DBMSs that are not RDF/SPARQL-based:
1) They do not allow structure-preserving querying. That is, I query a graph and want the result to be a smaller graph. This is trivial in SQL: you just 'SELECT * FROM x WHERE ...' and the result set you get is tabular, just like the table x. In SPARQL, there are CONSTRUCT/DESCRIBE queries that do just that - they give you the results as a graph (a rough sketch follows after this list).
2) They don't use any (internationally recognized) standard to represent graph data. RDF is the only such format known to me (ignore all the semantic web stuff associated with it and just consider the format).
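To make point 1 concrete, here is a minimal sketch using rdflib in Python (the data and names are invented for illustration): a CONSTRUCT query returns triples, so the output is itself a graph rather than a table of variable bindings.

    # Minimal sketch of structure-preserving querying with rdflib.
    # The example data and prefixes are made up.
    from rdflib import Graph

    g = Graph()
    g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:alice ex:hasSkill ex:python ; ex:worksOn ex:projectX .
    ex:bob   ex:hasSkill ex:sparql .
    """, format="turtle")

    # CONSTRUCT gives back triples, not rows.
    result = g.query("""
    PREFIX ex: <http://example.org/>
    CONSTRUCT { ?person ex:hasSkill ?skill }
    WHERE     { ?person ex:hasSkill ?skill }
    """)

    # Collect the result into a Graph: the output has the same shape
    # as the input, i.e. querying a graph yields a (smaller) graph.
    subgraph = Graph()
    for triple in result:
        subgraph.add(triple)
    print(subgraph.serialize(format="turtle"))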
230k edges is peanuts for a graph db. It's like when the number of rows times columns in your SQL DB is 230k. NASA could (should?) have just used Oxigraph, RDF4J, or Jena. Stardog and Ontotext are the paid options. However, it is quite nice to see more interest in graph-based DBMSs in general!
> “Which employees have cross-disciplinary expertise in AI/ML?”
Regarding the study itself, I did not understand who the target user of this is. I would be more interested in a Lessons Learned 2.0 study (I understand it was attempted once before [1]). I don't think the study at hand would be able to correctly answer questions about expertise.
On the technical side, as far as I understand, the cosine similarity was computed per triplet? In that case, I can see how pgvector could be used for this. Relevance expansion is the only thing in the article that made me think it would be cool if it works well. But even there, with a combo of a regular RDF DBMS + pgvector, one could first do a cosine-similarity query via pgvector and then compute an (S)CBD [2] of the subject (the "from" node) of each matching triplet - see the sketch after the links below.
[1]: https://youtu.be/QEBVoultYJg?t=1653
[2]: https://patterns.dataincubator.org/book/bounded-description....
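A rough sketch of the pgvector half of that combo, assuming the per-triplet embeddings live in a Postgres table (the table and column names here are hypothetical); the returned subjects would then be fed into an (S)CBD query against the RDF store.

    # Sketch of a cosine-similarity lookup via pgvector (psycopg2).
    # Assumes the pgvector extension is installed and a hypothetical
    # table triple_embeddings(subject, predicate, object, embedding).
    import psycopg2

    conn = psycopg2.connect("dbname=skills_graph")
    cur = conn.cursor()

    # Placeholder query embedding; in practice it would come from the
    # same embedding model used to embed the triplets.
    query_embedding = [0.12, -0.03, 0.54]
    vec_literal = "[" + ",".join(str(x) for x in query_embedding) + "]"

    # `<=>` is pgvector's cosine-distance operator (smaller = more similar).
    cur.execute(
        """
        SELECT subject, predicate, object
        FROM triple_embeddings
        ORDER BY embedding <=> %s::vector
        LIMIT 10
        """,
        (vec_literal,),
    )

    # Each returned subject is a candidate node for a bounded-description
    # query against the RDF DBMS.
    for subject, predicate, obj in cur.fetchall():
        print(subject, predicate, obj)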
Why? Because there is never a reward attached. Oh, you want to make me the AI resource for the agency, but without removing my former duties or increasing my pay? Ummmm, no thanks. Also, things tend to happen in waves (e.g. "AI"), so everyone needs a lot from a very few people at the same time. No one ever asks how those people can be empowered, just how we can put the screws to them so they work harder.
HR and Mgmt can f-off with their "skill resource bank" or whatever nonsense they call it this year. My skills are what I was hired for per the job description. If you want to discuss a new position or higher pay for different skills, I'm very happy to talk about how I can work with the org to make that happen. That's never the case, though.
But it doesn't actually show the thing.
That's the AI hype-cycle signal for "probably a bullshit/defective thing."