11+ Transformer Jobs in Pune | Transformer Job openings in Pune
Senior Data Engineer at Phonologies (India) Private Limited
Job Description
Phonologies is seeking a Senior Data Engineer to lead data engineering efforts for developing and deploying generative AI and large language models (LLMs). The ideal candidate will excel in building data pipelines, fine-tuning models, and optimizing infrastructure to support scalable AI systems for enterprise applications.
Role & Responsibilities
- Data Pipeline Management: Design and manage pipelines for AI model training, ensuring efficient data ingestion, storage, and transformation for real-time deployment.
- LLM Fine-Tuning & Model Lifecycle: Fine-tune LLMs on domain-specific data, and oversee the model lifecycle using tools like MLflow and Weights & Biases.
- Scalable Infrastructure: Optimize infrastructure for large-scale data processing and real-time LLM performance, leveraging containerization and orchestration in hybrid/cloud environments.
- Data Management: Ensure data quality, security, and compliance, with workflows for handling sensitive and proprietary datasets.
- Continuous Improvement & MLOps: Apply MLOps/LLMOps practices for automation, versioning, and lifecycle management, while refining tools and processes for scalability and performance.
- Collaboration: Work with data scientists, engineers, and product teams to integrate AI solutions and communicate technical capabilities to business stakeholders.
Preferred Candidate Profile
- Experience: 5+ years in data engineering, focusing on AI/ML infrastructure, LLM fine-tuning, and deployment.
- Technical Skills: Advanced proficiency in Python, SQL, and distributed data tools.
- Model Management: Hands-on experience with MLflow, Weights & Biases, and model lifecycle management.
- AI & NLP Expertise: Familiarity with LLMs (e.g., GPT, BERT) and NLP frameworks like Hugging Face Transformers.
- Cloud & Infrastructure: Strong skills with AWS, Azure, Google Cloud, Docker, and Kubernetes.
- MLOps/LLMOps: Expertise in versioning, CI/CD, and automating AI workflows.
- Collaboration & Communication: Proven ability to work with cross-functional teams and explain technical concepts to non-technical stakeholders.
- Education: Degree in Computer Science, Data Engineering, or related field.
Perks and Benefits
- Competitive Compensation: INR 20L to 30L per year.
- Innovative Work Environment: Work with cutting-edge AI and data engineering tools in a collaborative setting that supports continuous learning and personal growth.
We are seeking a results-driven Sales Executive to join our team in Pune. This full-time position requires engaging in B2B sales transactions and meeting monthly sales targets. The ideal candidate will possess strong communication skills and a willingness to visit customers as needed.
Key Responsibilities
· Conduct B2B sales transactions and achieve monthly sales targets.
· Build and maintain strong relationships with clients through effective communication.
· Understand customer needs and provide tailored solutions based on engineering products.
· Actively seek out new sales opportunities through networking and customer visits.
· Collaborate with team members to enhance overall customer satisfaction.
· Manage sales pressure effectively while maintaining a positive attitude.
Requirements
· Education: Any graduate with relevant experience; B.Tech and MBA candidates will be given preference.
· Experience: 2-7 years of sales experience, particularly in B2B transactions.
· Skills:
1. Strong interpersonal communication and customer service skills.
2. Ability to understand engineering concepts and products.
3. Proven track record of achieving sales targets.
4. Capability to work under pressure and handle challenging situations effectively.
Responsibilities:
- Delivering projects on time based on roadmap requirements
- Overseeing the technical architecture of the product
- Managing the Development Team including workloads, setting and managing targets, mentoring and training members of the team
- Writing technical specifications and managing projects through to completion
- Implementing and reviewing all development processes (coding standards, code review, code complexity)
- Identifying and resolving issues within the product
- Implementing and managing suitable monitoring systems for production platforms
- Monitoring costs across the AWS platform
Requirements:
- Be highly experienced in the following technologies: Java, Node.js, TypeScript, React, SQL
- Preferably have experience in Android development, Kotlin, MongoDB, and MQTT
- Demonstrate fluency in spoken and written English
- Excellent communication skills
- Have previous AWS Cloud experience
- Have experience in leading a Development Team, mentoring and training colleagues
- Demonstrate experience in managing projects in an agile environment, ideally with Jira or Azure DevOps
1. Good communication skills
2. Creative thinking
3. Basic Canva designing
4. Social media fluency
5. Basic content writing
6. Curiosity
7. Commitment to the role
The idea is simple: if you can commit to an internship that uses your skills and teaches you more, all while having fun... apply, and let us take it from there!
YOUR ‘OKR’ SUMMARY
OKR stands for Objectives and Key Results.
As a Software Engineer at Coredge, you will help develop our next-generation cloud native core solution along with the product and the open-source community to build the Coredge.io vision.
What will you do?
- System engineering and implementation in Python and Golang.
- Working on performance issues using creative experiments and internally developed product features.
- Research, propose, and integrate relevant open-source projects based on product objectives.
- Write organized, efficient, and well-documented Python/Golang code as an example for junior engineers.
- Participation in all levels of product definition, design, implementation, testing, and deployment.
- Ability to discuss abstract system architectures from ideas through implementation and creatively apply domain experience to solve technical challenges.
- Mentoring software engineers, fostering an environment of trust and accountability.
What will you need?
- A strong sense of ownership, urgency, and drive. As an integral part of the development team, you will need the following skills to succeed.
- Strong Python skills to develop framework(s).
- Hands-on experience designing and developing reusable framework components.
- Experience in engineering practices such as code refactoring, design patterns, design-driven development, continuous integration, building highly scalable applications, application security, and functional programming.
Additional Skills:
- Knowledge of cloud-native technologies would be an advantage.
- Understanding of Kubernetes architecture and its standard APIs.
- Code contributions to CNCF or similar communities will be a plus.
- Performance benchmarking of Kubernetes or any cloud platform will be an added advantage.
Additional Advantage:
- Deep understanding of technology and passionate about what you do.
- Background in designing high-performance, scalable software systems with a strong focus on optimizing hardware cost.
- Solid collaborative and interpersonal skills, specifically a proven ability to effectively guide and influence within a dynamic environment.
- Strong commitment to getting the most performance out of the system being worked on.
- Prior development of a large software project using service-oriented architecture operating under real-time constraints.
What's In It for You?
- You will get a chance to work on cloud-native and hyper-scale products
- You will be working with industry leaders in cloud.
- You can expect a steep learning curve.
- You will get the experience of solving real-time problems and will eventually become a problem solver.
Benefits & Perks:
- Competitive Salary
- Health Insurance
- Open Learning - 100% Reimbursement for online technical courses.
- Fast Growth - opportunities to grow quickly and steadily
- Creative Freedom + Flat hierarchy
- Sponsorship for employees who represent the company at events and meetups.
- Flexible working hours
- 5-day work week
- Hybrid Working model (Office and WFH)
Our Hiring Process:
- Candidates for this position can expect the following hiring process (progression is subject to successfully clearing each round):
- Initial Resume screening call with our Recruiting team
- Next, candidates will be invited to solve coding exercises.
- Next, candidates will be invited for a first technical interview
- Next, candidates will be invited for a final technical interview
- Finally, candidates will be invited for a Culture Plus interview with HR
- Candidates may be asked to interview with the Leadership team
- Successful candidates will subsequently be made an offer via email
- As always, interviews and the screening call will be conducted via a mix of telephone and video calls.
So, if you are looking for an opportunity to really make a difference, make it with us…
Coredge.io provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by applicable central, state, or local laws.
- Working days: 6 days (alternate Saturdays are off)
- Timing: Normal day shift
- Experience: 1-3 Years
- Budget: (Depends on the candidature)
Job Description:
Should know HTML, CSS, PHP, Node.js, JavaScript, and Bootstrap.
Location: Permanent work from home.
Immediate joiners preferred.
Note: People who want to restart their careers may also apply.
PS: The candidate should be able to communicate well.
About Company:
We are not just a technology company; we are a solutions company that uses technology as an enabler. We offer predictable delivery time and predictable cost, with a focus on your outcome and needs. We do this by making the otherwise unpredictable nature of a transformational journey predictable.
We believe in recognizing and sharing failures early in order to deliver with reduced leakages. With this philosophy and our unique ability to blend technologies, we have delivered value to several clients such as CTK Cosmetics, TravelSouq, WEM APAC, GEOSPOC, AREA 51, and many more. Many are digitizing their journey to transformational adoption, and we would love to be a part of your success story too.
- Designing and building mobile applications for Apple’s iOS platform.
- Collaborating with the design team to define app features.
- Ensuring quality and performance of the application to specifications.
- Identifying potential problems and resolving application bottlenecks.
- Fixing application bugs before the final release.
- Publishing application on App Store.
- Maintaining the code and automation of the application.
- Designing and implementing application updates.
We empower healthcare payers, providers, and members to quickly process medical data to make informed decisions and reduce healthcare costs. You will focus on research, development, strategy, operations, people management, and being a thought leader for team members based out of India. You should have professional healthcare experience using both structured and unstructured data to build applications. These applications include, but are not limited to, machine learning, artificial intelligence, optical character recognition, natural language processing, and integrating processes into the overall AI pipeline to mine healthcare and medical information with high recall and other relevant metrics. The results are used both for real-time operational processes, with automated and human-based decision making, and to help reduce healthcare administrative costs. We work with all major cloud and big data vendors' offerings (Azure, AWS, Google, IBM, etc.) to achieve and support our goals in healthcare.
The Director, Data Science will have the opportunity to build a team and shape team culture and operating norms, given the fast-paced nature of a new, high-growth organization.
• Strong communication and presentation skills to convey progress to a diverse group of stakeholders
• Strong expertise in data science, data engineering, software engineering, cloud vendors, big data technologies, real-time streaming applications, DevOps and product delivery
• Experience building stakeholder trust and confidence in deployed models, especially via attention to algorithmic bias, interpretable machine learning, data integrity, data quality, reproducible research, and reliable engineering for 24x7x365 product availability and scalability
• Expertise in healthcare privacy, federated learning, continuous integration and deployment, DevOps support
• Provide mentoring and career development support to data scientists and machine learning engineers
• Meet regularly with project team members to address their individual needs related to project/product deliverables
• Provide training and guidance for team members when required
• Provide performance feedback when required by leadership
The Experience You’ll Need (Required):
• MS/M.Tech degree or PhD in Computer Science, Mathematics, Physics or related STEM fields
• Significant healthcare data experience including but not limited to usage of claims data
• Delivered multiple data science and machine learning projects over 8+ years with values exceeding $10 million, and has worked on platforms covering more than 10 million member lives
• 9+ years of industry experience in data science, machine learning, and artificial intelligence
• Strong expertise in data science, data engineering, software engineering, cloud vendors, big data technologies, real time streaming applications, DevOps, and product delivery
• Knows how to solve and launch real artificial intelligence and data science problems and products, along with managing and coordinating business process change, IT/cloud operations, and meeting production-level code standards
• Ownership of key workflows in the data science lifecycle, such as data acquisition, data quality, and results
• Experience building stakeholder trust and confidence in deployed models, especially via attention to algorithmic bias, interpretable machine learning, data integrity, data quality, reproducible research, and reliable engineering for 24x7x365 product availability and scalability
• Expertise in healthcare privacy, federated learning, continuous integration and deployment, DevOps support
• 3+ years of experience directly managing five (5) or more senior-level data scientists and machine learning engineers with advanced degrees, and directly making staffing decisions
• Very strong understanding of mathematical concepts including, but not limited to, linear algebra, advanced calculus, partial differential equations, and statistics including Bayesian approaches, at a master's degree level or above
• 6+ years of programming experience in C++, Java, or Scala and in data science programming languages like Python and R, including a strong understanding of concepts like data structures, algorithms, compression techniques, high-performance computing, distributed computing, and various computer architectures
• Very strong understanding of and experience with traditional data science approaches like sampling techniques, feature engineering, classification, regressions, SVM, trees, and model evaluation, with several projects over 3+ years
• Very strong understanding of and experience in natural language processing, reasoning and understanding, information retrieval, text mining, and search, with 3+ years of hands-on experience
• Experience developing and deploying several products in production, with experience in two or more of the following languages (Python, C++, Java, Scala)
• Strong Unix/Linux background and experience with at least one of the following cloud vendors: AWS, Azure, or Google
• Three plus (3+) years of hands-on experience with the MapR, Cloudera, or Databricks big data platform with Spark, Hive, Kafka, etc.
• Three plus (3+) years of experience with high-performance computing like Dask, CUDA distributed GPU, TPU, etc.
• Presented at major conferences and/or published materials