You will work alongside a highly motivated, advanced Machine Learning team; key responsibilities are to research, design, develop, and implement applications that will be integrated into our workflows.
Responsibilities and Accountabilities:
Skill Set & Personality Traits Required:
About Uber9 Business Process Services Pvt Ltd
India's Largest Online Platform For Legal, Tax and Compliance Services. - https://t.co/f4giVXnWD7
Vakilsearch is India's largest online legal, tax and compliance provider. Through its products and end-to-end workflow automation, Vakilsearch has revolutionised how start-ups and small and medium enterprises register, seamlessly run, and comply with government regulations. On our mission to provide individuals and businesses one-click access to all their legal and professional needs, we have helped over 4 lakh (400,000) start-ups and small and medium enterprises to date.
Visit us on www.vakilsearch.com
Vakilsearch in recent news: https://economictimes.indiatimes.com/tech/funding/incorp-india-invests-10-million-in-vakilsearch/articleshow/87352272.cms
Vakilsearch is a people-first organisation that thrives on the enthusiasm of our team to execute our mission to the satisfaction of our customers. To this end, we stress creating an optimal work-life balance and inculcating a strong sense of team spirit that stems from enthusiasm and good vibes. When you work at Vakilsearch, you don't just become an employee; you become family, and we always rally around each other to keep that sense of family strong.
They started with a singular belief - what is beautiful cannot and should not be defined in marketing meetings. It's defined by regular people like us, our sisters, our next-door neighbours, and the friends we make on the playground and in lecture halls. That's why we stand for people-proving everything we do. From the inception of a product idea to testing the final formulations before launch, our consumers are a part of each and every process. They guide and inspire us by sharing their stories with us. They tell us not only about the products they need and the skincare issues they face, but also the tales of their struggles, dreams and triumphs.

Skincare goes deeper than skin. It's a form of self-care for many. Wherever someone is on this journey, we want to cheer them on through the products we make, the content we create and the conversations we have.

What we wish to build is more than a brand. We want to build a community that grows and glows together - cheering each other on, sharing knowledge, and ensuring people always have access to skincare that really works.
We are seeking a skilled and motivated Data Engineer to join our team. As a Data Engineer, you will be responsible for designing, developing, and maintaining the data infrastructure and systems that enable efficient data collection, storage, processing, and analysis. You will collaborate with cross-functional teams, including data scientists, analysts, and software engineers, to implement data pipelines and ensure the availability, reliability, and scalability of our data platform.
Design and implement scalable and robust data pipelines to collect, process, and store data from various sources.
Develop and maintain data warehouse and ETL (Extract, Transform, Load) processes for data integration and transformation.
Optimize and tune the performance of data systems to ensure efficient data processing and analysis.
Collaborate with data scientists and analysts to understand data requirements and implement solutions for data modeling and analysis.
Identify and resolve data quality issues, ensuring data accuracy, consistency, and completeness.
Implement and maintain data governance and security measures to protect sensitive data.
Monitor and troubleshoot data infrastructure, perform root cause analysis, and implement necessary fixes.
Stay up-to-date with emerging technologies and industry trends in data engineering and recommend their adoption when appropriate.
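The pipeline responsibilities above can be sketched minimally. Below is a toy extract-transform-load flow; the CSV source, table name, and quality rule are hypothetical stand-ins for real sources and a production warehouse (SQLite stands in for the warehouse):

```python
import csv
import io
import sqlite3

# Extract: read raw records from a source (an in-memory CSV stand-in here).
RAW = "user_id,amount\n1,10.5\n2,abc\n3,7.25\n"

def extract(text):
    return list(csv.DictReader(io.StringIO(text)))

# Transform: validate and normalise types; drop rows that fail the check.
def transform(rows):
    clean = []
    for row in rows:
        try:
            clean.append((int(row["user_id"]), float(row["amount"])))
        except ValueError:
            continue  # a real pipeline would route bad rows to a quarantine table
    return clean

# Load: write into the warehouse table (SQLite as a stand-in).
def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS payments (user_id INTEGER, amount REAL)")
    conn.executemany("INSERT INTO payments VALUES (?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(extract(RAW)), conn)
total = conn.execute("SELECT SUM(amount) FROM payments").fetchone()[0]
```

The same extract/transform/load shape scales up to Spark or Kafka-based pipelines; only the engines change.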
Bachelor’s or higher degree in Computer Science, Information Systems, or a related field.
Proven experience as a Data Engineer or similar role, working with large-scale data processing and storage systems.
Strong programming skills in languages such as Python, Java, or Scala.
Experience with big data technologies and frameworks like Hadoop, Spark, or Kafka.
Proficiency in SQL and database management systems (e.g., MySQL, PostgreSQL, or Oracle).
Familiarity with cloud platforms like AWS, Azure, or GCP, and their data services (e.g., S3, Redshift, BigQuery).
Solid understanding of data modeling, data warehousing, and ETL principles.
Knowledge of data integration techniques and tools (e.g., Apache Nifi, Talend, or Informatica).
Strong problem-solving and analytical skills, with the ability to handle complex data challenges.
Excellent communication and collaboration skills to work effectively in a team environment.
Advanced knowledge of distributed computing and parallel processing.
Experience with real-time data processing and streaming technologies (e.g., Apache Kafka, Apache Flink).
Familiarity with machine learning concepts and frameworks (e.g., TensorFlow, PyTorch).
Knowledge of containerization and orchestration technologies (e.g., Docker, Kubernetes).
Experience with data visualization and reporting tools (e.g., Tableau, Power BI).
Certification in relevant technologies or data engineering disciplines.
Company: Hex PixelPhant Pvt Ltd
Location: Udaipur, RJ / Hybrid / Remote
Job Type: Full-time
We are seeking two or more experienced Machine Learning Ops / Machine Learning experts to lead the development and deployment of our new product - an AI-powered product photo editing system. This product aims to revolutionize product photo editing for eCommerce businesses by automating the process through machine learning.
- Design, develop, and maintain scalable end-to-end machine learning pipelines.
- Deploy models using TensorFlow and AWS SageMaker.
- Use OpenCV for various image-processing tasks and to implement algorithms.
- Incorporate NLP basics where required for tasks such as metadata tagging or chatbot development.
- Utilize Generative AI techniques for tasks such as image generation, manipulation, or quality enhancement.
- Employ Stable Diffusion for generating realistic product images.
- Develop and maintain Google Colab notebooks for model development and testing.
- Collaborate closely with the product development team to integrate AI models into the product.
- Ensure efficient use of computational resources.
- Keep up-to-date with the latest industry trends and technologies to ensure our AI systems are cutting-edge.
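To illustrate the kind of image-processing step OpenCV would handle in such a pipeline, here is a pure-Python sketch of binary thresholding - the core idea behind simple foreground/background masking. In practice this is a single `cv2.threshold` call on a real image array; the tiny grayscale "image" below is hypothetical:

```python
# A toy 4x4 grayscale image (0 = black, 255 = white); a real pipeline
# would load an image with cv2.imread and operate on a NumPy array.
image = [
    [ 12,  40, 200, 220],
    [ 30,  90, 210, 250],
    [ 20, 180, 240, 255],
    [ 10,  60, 230, 245],
]

def threshold(img, cutoff=128):
    """Binary threshold: foreground (255) where pixel > cutoff, else 0."""
    return [[255 if px > cutoff else 0 for px in row] for row in img]

mask = threshold(image)
foreground_pixels = sum(px == 255 for row in mask for px in row)
```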
- Bachelor's or Master's degree in Computer Science, AI, Machine Learning, or a related field. Ph.D. is a plus.
- Proficient in TensorFlow, Google Colab, OpenCV, and AWS SageMaker.
- Solid understanding of Natural Language Processing (NLP) and Generative AI techniques.
- Familiarity with Stable Diffusion Models.
- Strong knowledge of Python and experience with scripting in a Linux environment.
- Experience in working with Docker containers and Kubernetes for deployment is highly desirable.
- Familiarity with other AWS services (such as EC2, S3, Lambda, etc.).
- Understanding of CI/CD practices and experience in their implementation in ML contexts.
- Ability to communicate complex data concepts and insights to stakeholders at all levels.
- Previous experience in building ML models for image processing or photo editing applications is a plus.
What We Offer:
- Competitive salary package.
- Opportunity to work with a dynamic, innovative team.
- A vibrant, inclusive culture that celebrates diversity.
- Learning and development opportunities.
- PixelPhant is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees.
How to Apply: Please send your updated CV, a link to your GitHub or portfolio demonstrating relevant work, and a brief note about why you're interested in this role to our company email, or connect with us on LinkedIn.
Intuitive Cloud ( www.intuitive.cloud ) is one of the fastest-growing top-tier Cloud Solutions and SDx engineering and services companies, supporting 80+ global enterprise customers across the Americas, Europe, and the Middle East.
Intuitive is a recognized professional and managed services partner with core strengths in cloud (public/hybrid), security, GRC, DevSecOps, SRE, application modernization/containers/Kubernetes-as-a-service, and cloud application delivery.
- 9+ years’ experience as a data engineer.
- Must have 4+ years implementing data engineering solutions with Databricks.
- This is a hands-on role building data pipelines using Databricks; hands-on technical experience with Apache Spark is required.
- Must have deep expertise in one of the programming languages for data processing (Python, Scala). Experience with Python, PySpark, Hadoop, Hive and/or Spark to write data pipelines and data-processing layers.
- Must have worked with relational databases like Snowflake; good SQL experience writing complex SQL transformations.
- Performance tuning of Spark SQL running on S3/Data Lake/Delta Lake storage, and strong knowledge of Databricks and cluster configurations.
- Hands-on architectural experience.
- Nice to have: Databricks administration, including Databricks security and infrastructure features.
About FarMart
At FarMart we are building the world’s first OS powering food value chains. By digitising and incentivising the rural agri-retailer, FarMart has created one-stop hubs for farmers to buy inputs and sell outputs in close proximity to their farms. This alternative, asset-light food value chain eliminates considerable transportation costs, spillage, and time and effort for both the producer and the end buyer. Are you passionate about the intersection of tech and food?
Role: Data Scientist II
Experience: 2-4 years
Are you a beginner whose eyes light up when you see the progress bar of your model training or are you an experienced data professional whose heart sinks as the model loss starts to climb up? If that’s you, we like you already. Do you think about problem statements when you are on a cab ride or do you open up blog articles to entertain and enlighten you in boring meetings? If that’s you, we like you more now. All in all, you must have an insatiable hunger for knowledge and a team player attitude!
- Understand and optimize the data infrastructure
- Develop visualization dashboards for the business and operations teams
- Set up and own data acquisition from several external sources
- Manage and clean the data for use by several systems
- Develop state-of-the-art Deep Learning/Classical models
- Deploy and Maintain production services
- Contribute to the community through open-source, blogs, etc.
What are we looking for
- Deep understanding of core concepts
- Broader knowledge of different types of problem statements and approaches
- Excellent hold on Python and the standard library
- Knowledge of industry-standard tools like scikit-learn, TensorFlow/PyTorch, etc.
- Experience with Computer Vision, Forecasting, and NLP will come in handy.
- A get shit done attitude
- A research mindset and a creative caliber to utilize previous work to your advantage.
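The "model loss" intuition above can be made concrete with a minimal training loop. Here is a from-scratch gradient-descent fit of y = w·x; the toy data and learning rate are purely illustrative (real work would use scikit-learn or PyTorch, as noted above):

```python
# Toy dataset generated from y = 3x; we recover w by gradient descent.
data = [(x, 3.0 * x) for x in range(1, 6)]

def mse(w):
    """Mean squared error of the prediction y_hat = w * x."""
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

w, lr = 0.0, 0.01
losses = []
for _ in range(100):
    # dL/dw for mean squared error on y_hat = w * x
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad
    losses.append(mse(w))
```

If the loss in `losses` starts to climb instead of fall, the learning rate is too high - the "heart sinks" moment described above.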
LodgIQ is led by a team of experienced hospitality technology experts, data scientists and product domain experts. Seed funded by Highgate Ventures, a venture capital platform focused on early stage technology investments in the hospitality industry and Trilantic Capital Partners, a global private equity firm, LodgIQ has made a significant investment in advanced machine learning platforms and data science.
Title : Data Scientist
- Apply Data Science and Machine Learning to a REAL-LIFE problem - “Predict Guest Arrivals and Determine Best Prices for Hotels”
- Apply advanced analytics in a BIG Data Environment – AWS, MongoDB, scikit-learn
- Help scale up the product in a global offering across 100+ global markets
- Minimum 3 years of experience with advanced data analytic techniques, including data mining, machine learning, statistical analysis, and optimization. Student projects are acceptable.
- At least 1 year of experience with Python / NumPy / Pandas / SciPy / Matplotlib / scikit-learn
- Experience working with massive data sets, both structured and unstructured, with at least one prior engagement involving data gathering, data cleaning, data mining, and data visualization
- Solid grasp of optimization techniques
- Master's or PhD degree in Business Analytics, Data Science, Statistics, or Mathematics
- Ability to show a track record of solving large, complex problems
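The "best price" problem mentioned above can be illustrated with a toy revenue-maximisation search. The linear demand curve and its coefficients below are hypothetical, purely to show the shape of the optimisation:

```python
def expected_bookings(price):
    """Hypothetical linear demand curve: bookings fall as price rises."""
    return max(0.0, 100.0 - 0.5 * price)

def best_price(candidates):
    # Grid search over candidate prices for maximum expected revenue.
    return max(candidates, key=lambda p: p * expected_bookings(p))

prices = range(50, 201, 10)
p_star = best_price(prices)
```

In production, the demand curve itself would be learned from arrival forecasts rather than assumed.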
Department: Engineering
Bidgely is looking for an extraordinary and dynamic Senior Data Analyst to be part of its core team in Bangalore. You must have delivered exceptionally high-quality, robust products dealing with large data. Be part of a highly energetic and innovative team that believes nothing is impossible with some creativity and hard work.
● Design and implement a high-volume data analytics pipeline in Looker for Bidgely's flagship product.
● Implement data pipelines in the Bidgely Data Lake
● Collaborate with product management and engineering teams to elicit & understand their requirements & challenges and develop potential solutions
● Stay current with the latest tools, technology ideas and methodologies; share knowledge by clearly articulating results and ideas to key decision makers.
● 3-5 years of strong experience in data analytics and in developing data pipelines.
● Very good expertise in Looker
● Strong in data modeling, developing SQL queries and optimizing queries.
● Good knowledge of data warehouse (Amazon Redshift, BigQuery, Snowflake, Hive).
● Good understanding of Big data applications (Hadoop, Spark, Hive, Airflow, S3, Cloudera)
● Attention to detail. Strong communication and collaboration skills.
● BS/MS in Computer Science or equivalent from premier institutes.
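The "optimizing queries" requirement above can be demonstrated with a small, self-contained example: adding an index so a filtered query no longer scans the whole table. SQLite stands in for Redshift/BigQuery/Snowflake here, and the schema and data are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (meter_id INTEGER, kwh REAL)")
conn.executemany("INSERT INTO readings VALUES (?, ?)",
                 [(i % 100, i * 0.1) for i in range(1000)])

query = "SELECT SUM(kwh) FROM readings WHERE meter_id = 42"

# Without an index, the planner must scan the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

# Adding an index on the filter column lets the planner seek directly.
conn.execute("CREATE INDEX idx_meter ON readings (meter_id)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()

uses_index = any("USING INDEX" in row[-1] for row in plan_after)
```

Reading the query plan before and after a change is the same habit that drives tuning in Redshift or BigQuery, just with different plan formats.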
- The Data & Analytics team is responsible for integrating new data sources and building data models, data dictionaries, and machine learning models for the Wholesale Bank.
- The goal is to design and build data products that support squads in the Wholesale Bank with business outcomes and the development of business insights. In this job family we distinguish between Data Analysts and Data Scientists. Both scientists and analysts work with data and are expected to write queries, work with engineering teams to source the right data, perform data munging (getting data into the correct format, convenient for analysis/interpretation), and derive information from data.
- The data analyst typically works on simpler structured SQL or similar databases or with other BI tools/packages. Data Scientists are expected to build statistical models or be hands-on in machine learning and advanced programming.
- The role of the Data Scientist is to support our corporate banking teams with insights gained from analysing company data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization, and at using models to test the effectiveness of different courses of action. They must have strong experience using a variety of data mining/data analysis methods and data tools, building and implementing models, using/creating algorithms, and creating/running simulations. They must have banking or corporate banking experience.
Experience: 6 - 10 Years
- Should be comfortable solving Wholesale Banking domain analytical problems within an AI/ML platform
1. Must have 3+ years of very good hands-on technical experience with Java or Python
2. Working experience and good understanding of AWS Cloud; Advanced experience with IAM policy and role management
3. Infrastructure Operations: 5+ years supporting systems infrastructure operations, upgrades, deployments using Terraform, and monitoring
4. Hadoop: Experience with Hadoop (Hive, Spark, Sqoop) and / or AWS EMR
5. Knowledge on PostgreSQL/MySQL/Dynamo DB backend operations
6. DevOps: Experience with DevOps automation - Orchestration/Configuration Management and CI/CD tools (Jenkins)
7. Version Control: Working experience with one or more version control platforms like GitHub or GitLab
8. Knowledge of AWS QuickSight reporting
9. Monitoring: Hands-on experience with monitoring tools such as AWS CloudWatch, AWS CloudTrail, Datadog, and Elasticsearch
10. Networking: Working knowledge of TCP/IP networking, SMTP, HTTP, load-balancers (ELB) and high availability architecture
11. Security: Experience implementing role-based security, including AD integration, security policies, and auditing in a Linux/Hadoop/AWS environment. Familiar with penetration testing and scan tools for remediation of security vulnerabilities.
12. Demonstrated successful experience learning new technologies quickly
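For the IAM policy management requirement above, this is the kind of scoped policy such a role writes and audits; the bucket name, Sid, and action list are purely illustrative:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadOnlyDataLakeAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::example-data-lake",
        "arn:aws:s3:::example-data-lake/*"
      ]
    }
  ]
}
```

Least-privilege policies like this one are typically attached to roles rather than users, then versioned and deployed through Terraform as noted in requirement 3.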
WHAT WILL BE THE ROLES AND RESPONSIBILITIES?
1. Create procedures/run books for operational and security aspects of AWS platform
2. Improve AWS infrastructure by developing and enhancing automation methods
3. Provide advanced business and engineering support services to end users
4. Lead other admins and platform engineers through design and implementation decisions to achieve balance between strategic design and tactical needs
5. Research and deploy new tools and frameworks to build a sustainable big data platform
6. Assist with creating programs for training and onboarding for new end users
7. Lead Agile/Kanban workflows and team process work
8. Troubleshoot issues to resolve problems
9. Provide status updates to Operations product owner and stakeholders
10. Track all details in the issue tracking system (JIRA)
11. Provide issue review and triage problems for new service/support requests
12. Use DevOps automation tools, including Jenkins build jobs
13. Fulfil ad-hoc data or report requests from different functional groups