Submitted by drinkingsomuchcoffee t3_113m1ly in MachineLearning
baffo32 t1_j8vrc4d wrote
HuggingFace recently released a PEFT library that reimplements the core functionality of AdapterHub. AdapterHub had reached out to them to contribute and integrate their work, but this went nowhere in February of last year ( https://github.com/adapter-hub/adapter-transformers/issues/65#issuecomment-1031983053 ). When Hugging Face was asked how the new work related to the old, it was sad to see that they had built it completely independently, ignoring the past outreach ( https://github.com/huggingface/peft/issues/92#issuecomment-1431227939 ). The reply reads to me as if they are implementing the same feature set, unaware that it is the same one.
I would like to know why this didn’t go better. The person who spearheaded AdapterHub for years appears to be one of the most prominent PEFT researchers, with published papers, yet it looks as if they were tossed out in the snow. I can only imagine that management never learned of the outreach, or, equally likely, that they have no idea how to work with other projects to merge concepts from multiple codebases, or simply don't consider it worth doing. It would have been nice to at least see lip service paid.
The library and hub are not complex. Is there a community alternative conducive to code organization or do we need to start yet another?
Sometimes I think it would make sense to use language models to transform and organize code and merge duplicated work, with tools like langchain and chatgpt, so that future contributions could be integrated into a more organized system.
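As a rough illustration of that idea, here is a minimal sketch of the prompt-construction step for an LLM-driven merge of two overlapping implementations. The function name, prompt wording, and snippets are my own assumptions for illustration, not part of any real library; the actual model call (via langchain or an OpenAI client) is deliberately omitted.

```python
# Hypothetical sketch: build a prompt asking a chat model to unify two
# libraries that implement the same feature set (e.g. peft and
# adapter-transformers). All names and wording here are illustrative.

def build_merge_prompt(name_a: str, code_a: str, name_b: str, code_b: str) -> str:
    """Compose an instruction prompt asking a model to merge two codebases."""
    return (
        "You are refactoring two libraries that implement the same feature set.\n"
        f"--- {name_a} ---\n{code_a}\n"
        f"--- {name_b} ---\n{code_b}\n"
        "Produce a single merged module that preserves both public APIs, "
        "deduplicates shared logic, and notes any behavioral differences."
    )

# Toy stand-ins for real source files; a real pipeline would read and
# chunk the actual repositories before prompting.
prompt = build_merge_prompt(
    "peft", "def add_adapter(model, cfg): ...",
    "adapter-transformers", "def add_adapter(model, config): ...",
)
# The resulting prompt would then be sent to a chat model; the response
# would need human review before landing in either codebase.
```

The hard parts this sketch glosses over are chunking large repositories to fit a context window and verifying that the merged output actually preserves behavior, which is why human review would still be essential.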
Projects where everyone can work together are best.