
Overview
adesso India specialises in optimization of core business processes for organizations. Our focus is on providing state-of-the-art solutions that streamline operations and elevate productivity to new heights.
Our team of industry experts and experienced technology professionals ensures that our software development and implementations are reliable, robust, and seamlessly integrated with the latest technologies. By leveraging this extensive knowledge and skill set, we empower businesses to achieve their objectives efficiently and effectively.
Job Description
We are seeking a skilled Cloud Data Engineer with experience on cloud data platforms such as AWS or Azure, and especially with Snowflake and dbt, to join our dynamic team. As a consultant, you will be responsible for developing new data platforms and creating the data processes that run on them. You will collaborate with cross-functional teams to design, develop, and deploy high-quality data solutions.
Responsibilities:
Customer consulting: You develop data-driven products on the Snowflake cloud and connect data & analytics with specialist departments. You develop ELT processes using dbt (data build tool).
Specifying requirements: You develop concrete requirements for future-proof cloud data architectures.
Develop data pipelines: You design scalable and high-performance data management processes.
Analyze data: You derive sound findings from data sets and present them in an understandable way.
Requirements:
Requirements management and project experience: You successfully implement cloud-based data & analytics projects.
Data architectures: You are proficient in DWH/data lake concepts and modeling with Data Vault 2.0.
Cloud expertise: You have extensive knowledge of Snowflake, dbt and other cloud technologies (e.g. MS Azure, AWS, GCP).
SQL know-how: You have a solid, well-founded knowledge of SQL.
Data management: You are familiar with topics such as master data management and data quality.
Bachelor's degree in computer science, or a related field.
Strong communication and collaboration abilities to work effectively in a team environment.
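The Data Vault 2.0 modeling listed in the requirements leans heavily on hash keys: hubs, links, and satellites are joined on deterministic hashes of business keys rather than source-system surrogates. A minimal Python sketch, assuming MD5 over a normalized business key (a common but project-specific convention; the key value shown is invented for illustration):

```python
import hashlib

def hub_hash_key(business_key: str) -> str:
    """Compute a deterministic hub hash key from a business key.

    Data Vault 2.0 commonly derives hub keys by hashing the normalized
    business key; MD5 is a frequent choice, though the algorithm and
    normalization rules are a project-level decision.
    """
    normalized = business_key.strip().upper()  # trim and case-fold before hashing
    return hashlib.md5(normalized.encode("utf-8")).hexdigest()

# The same business key always yields the same hash key, regardless of
# stray whitespace or casing differences between source systems.
print(hub_hash_key("cust-1001"))
print(hub_hash_key("CUST-1001") == hub_hash_key("  cust-1001 "))  # True
```

Because the key is derived purely from the business key, independently loaded satellites and links land on the same hub row without any cross-system key lookup.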
Skills & Requirements
Cloud Data Engineering, AWS, Azure, Snowflake, dbt, ELT processes, Data-driven consulting, Cloud data architectures, Scalable data management, Data analysis, Requirements management, Data warehousing, Data lake, Data Vault 2.0, SQL, Master data management, Data quality, GCP, Strong communication, Collaboration.
Experience: 1 to 8 years. CTC: 4 to 20 LPA.
A malware analyst examines malicious software, such as bots, worms, and trojans, to understand the nature of the threat. This task usually involves reverse-engineering the compiled executable and examining how the program interacts with its environment. The analyst may be asked to document the specimen's attack capabilities, understand its propagation characteristics, and define signatures for detecting its presence. A malware analyst is sometimes called a reverse engineer.
Security product companies, in industries such as anti-virus or network intrusion prevention, may hire malware analysts to develop ways of blocking malicious code. Large organizations in non-security industries may also hire full-time malware analysts to help protect their environment from attacks, or to respond to incidents that involve malicious software. Malware analysis skills are also valued by companies that cannot justify hiring full-time people to perform this work, but who wish their security or IT administrators to be able to examine malicious software when the need arises.



- Responsible for building .NET applications.
- Web API or MVC with C# is a must.
- Hands-on SQL Server experience, with expertise in complex stored procedures and SQL performance optimization.
- Front-end skills (React or Angular) are a plus but not mandatory.
- Knowledge of JavaScript, jQuery, and Ajax.
- In-depth understanding of Agile, Scrum, and Kanban.
Should have 2-4 years of experience in product design.
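The SQL performance work mentioned above often starts with checking whether a query can use an index at all. A hedged sketch using Python's built-in sqlite3 purely as a stand-in for SQL Server (on SQL Server itself you would read the execution plan instead; the table and index names are made up):

```python
import sqlite3

# sqlite3 stands in for SQL Server here only to illustrate the idea:
# the same query plan jumps from a full scan to an index search
# once a suitable index exists.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INT, total REAL)"
)
conn.executemany(
    "INSERT INTO orders (customer_id, total) VALUES (?, ?)",
    [(i % 100, i * 1.5) for i in range(1000)],
)

query = "SELECT * FROM orders WHERE customer_id = 42"

# Without an index, the lookup scans the whole table.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_before)  # full table scan

conn.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")

# With the index in place, the planner switches to an index search.
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query).fetchall()
print(plan_after)  # search using idx_orders_customer
```

The exact plan text differs by engine and version, but the scan-vs-seek distinction is the first thing to look for when tuning a slow query.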
You’ll be our: Application Developer
You’ll be based at: IBC Knowledge Park, Bengaluru
You’ll be Aligned with: Application Development Lead
You’ll be a member of: Vehicle Software Team
What you’ll do at Ather:
- You will develop the various applications for the cloud that process data coming from the vehicle.
- You shall work with the Lead and product team to define and deliver features.
- Be part of the Agile team and help define sprint work content.
- Work with cross-functional teams to drive overall feature delivery.
- Be part of initiatives that achieve the functional requirements of the team. Explore the technical feasibility of solutions; propose and evaluate tech stacks.
- Performance tuning and benchmarking of features.
Here’s what we are looking for:
- Strong knowledge of and hands-on experience with Node.js, REST APIs, and JavaScript; Java development experience on cloud-based infrastructure.
- Strong fundamentals in any full stack of your choice, OOP concepts, and Google Cloud Platform. Knowledge of Java, React, React Native, Android Native, HTML5, Firebase, and Go is a plus.
- Knowledge of CI/CD pipelines, JIRA, UML, static code analysis tools, and unit test frameworks is required.
- Knowledge of software communication protocols such as gRPC, HTTP, and MQTT is a plus. Cross-platform development experience would be needed.
- Knowledge of cloud-based solutions would be a plus.
- App development for a product beyond a mobile platform would be a plus. Experience with telematics/infotainment projects would be a great advantage.
- Strong experience with Agile methodology, along with tools such as JIRA and Confluence.
- Experience with flashing and debugging tools is a plus.
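The unit-test-framework requirement above is framework-agnostic; the arrange-act-assert pattern looks the same in every stack. A small sketch using Python's unittest (the role itself centers on Node.js/Java, and `vehicle_status` with its payload fields is invented here purely for illustration):

```python
import unittest

def vehicle_status(payload: dict) -> str:
    """Classify a telemetry payload (illustrative, not Ather's actual API)."""
    if payload.get("battery_pct", 0) < 10:
        return "low_battery"
    return "ok"

class VehicleStatusTest(unittest.TestCase):
    def test_low_battery(self):
        self.assertEqual(vehicle_status({"battery_pct": 5}), "low_battery")

    def test_ok(self):
        self.assertEqual(vehicle_status({"battery_pct": 80}), "ok")

# Run the suite programmatically; a CI/CD pipeline would invoke the
# equivalent runner (unittest, Jest, JUnit, ...) on every commit.
result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(VehicleStatusTest)
)
print(result.wasSuccessful())  # True
```

Wiring such a runner into the CI/CD pipeline mentioned above is what turns the tests into an automatic gate on every change.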
You bring to Ather:
- Bachelor’s/Master’s in computer science or another equivalent degree.
• Team Lead (6-8 years of total experience); a minimum of 3 years of team-lead experience is required
• Knowledge of the full technology stack and experience leading a team
• Proposing the right data structures
• Deciding on the right technology stack
• Process flow design
• JIRA (project management tool)
Job Description:
• Lead end-to-end IT needs of the division and build a strong full-stack delivery team
• Lead directly (hands-on) the building and running of a best-in-class platform for online learning and an application portal
• Incorporate key features including adaptive learning, analytics tracking capabilities, streamlined application processes on the portal, and machine learning for an automatic university recommendation system
• Ensure scalability, performance, resilience, and device portability of the platform, including adequate backups, disaster recovery, pen testing, network security, etc., to avoid data breaches, given the high volume of sensitive data
• Lead collaboration with the development teams, project management, IT service and support teams, and others to support their activities, respond to their needs, and respond to incidents
Candidate profile:
Ideally, you’d also have experience with:
• Developing and maintaining products that are used by many thousands or millions of people
• Ruby, Rails, React, Node.js, JavaScript, or Python
• Kafka, Amazon MQ, RabbitMQ or similar streaming or messaging systems
• EdTech, eCommerce, or content-management software
• Relational databases, ORM frameworks, and their alternatives
• Microservices or SOA; RESTful APIs, JSON
• AWS, Docker, Kubernetes, ELK stack
• DevOps, Observability, Infrastructure as Code
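The messaging systems listed above (Kafka, Amazon MQ, RabbitMQ) all share the same produce/consume decoupling: producers serialize events onto a broker, and consumers process them asynchronously. A sketch of that pattern using the standard-library `queue` as an in-memory stand-in for a real broker (the event shape is illustrative):

```python
import json
import queue
import threading

# queue.Queue stands in for a message broker such as Kafka or RabbitMQ:
# it gives the same producer/consumer decoupling, just within one process.
broker = queue.Queue()
results = []

def consumer():
    """Drain the queue, deserializing each event, until the sentinel arrives."""
    while True:
        raw = broker.get()
        if raw is None:  # sentinel: no more messages
            break
        event = json.loads(raw)
        results.append(event["order_id"])

worker = threading.Thread(target=consumer)
worker.start()

# Producer side: serialize events onto the broker.
for order_id in (101, 102, 103):
    broker.put(json.dumps({"order_id": order_id}))
broker.put(None)
worker.join()

print(results)  # [101, 102, 103]
```

A real broker adds durability, partitioning, and delivery guarantees on top of this shape, which is where the operational differences between Kafka and RabbitMQ start to matter.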
Requirements
Technical Skills
- Ability to design, deliver, and operate all Operations/SRE services and processes, including managing L2 environment support
- 5-12 years of overall environment support experience, with 5+ years as a support/SRE engineer
- Experience implementing monitoring solutions using APM tools (e.g., AppDynamics, Graylog, Dynatrace, Datadog); setting up and testing proactive monitoring alerts
- Have a broad knowledge profile and really excel in some areas, such as HTTP/TLS, DNS, networking or containerization
- Comfortable with large scale production systems and technologies, for example load balancing, monitoring, distributed systems, microservices, and configuration management.
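The proactive monitoring alerts mentioned above usually reduce to a rate-over-threshold rule evaluated on a time window. A toy Python sketch of such a rule (the function name and the 1% threshold are illustrative, not any APM tool's actual API):

```python
def should_alert(errors: int, requests: int, threshold: float = 0.01) -> bool:
    """Fire an alert when the error rate over a window exceeds the threshold.

    A toy version of the alert rules an APM tool such as Datadog or
    AppDynamics would evaluate; the 1% default threshold is illustrative.
    """
    if requests == 0:
        return False  # no traffic in the window, nothing to alert on
    return errors / requests > threshold

print(should_alert(errors=15, requests=1000))  # True  (1.5% > 1%)
print(should_alert(errors=5, requests=1000))   # False (0.5% <= 1%)
```

Production alerting layers on duration conditions and multi-window burn rates to keep a single noisy minute from paging the on-call engineer.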
Process Skills
- Systematic problem-solving approach, coupled with strong communication skills and a sense of ownership and drive
- Interest in designing, analyzing and troubleshooting large-scale distributed systems.
Behavioral Skills
- Practice sustainable incident response and blameless postmortems.
- Proven ability in developing relationships with stakeholders, communicating project/program status, and understanding detailed business requirements across multiple project initiatives
- This role requires candidates to work in rotational shifts (24x7 support).
Benefits
LOCATION: Mumbai
COMPENSATION: Competitive
WHY ZYCUS? :
- Be a part of one of the fastest-growing product companies in India
- Come join a young, dynamic & enterprising team
- Work on the latest technologies
- Flexible working hours (As per business requirement).
Zycus, Global Leader in Procurement: https://www.zycus.com/newsroom/press-releases.html




DataWeave provides Retailers and Brands with “Competitive Intelligence as a Service” that enables them to take key decisions that impact their revenue. Powered by AI, we provide easily consumable and actionable competitive intelligence by aggregating and analyzing billions of publicly available data points on the Web to help businesses develop data-driven strategies and make smarter decisions.
Data Science@DataWeave
We, the Data Science team at DataWeave (called Semantics internally), build the core machine learning backend and the structured domain knowledge needed to deliver insights through our data products. Our underpinnings are innovation, business awareness, long-term thinking, and pushing the envelope. We are a fast-paced lab within the org, applying the latest research in Computer Vision, Natural Language Processing, and Deep Learning to hard problems in different domains.
How do we work?
It's hard to tell what we love more, problems or solutions! Every day, we choose to address some of the hardest data problems that there are. We are in the business of making sense of messy public data on the web. At serious scale!
What do we offer?
- Some of the most challenging research problems in NLP and Computer Vision. Huge text and image datasets that you can play with!
- Ability to see the impact of your work and the value you're adding to our customers almost immediately.
- Opportunity to work on different problems and explore a wide variety of tools to figure out what really excites you.
- A culture of openness. Fun work environment. A flat hierarchy. Organization wide visibility. Flexible working hours.
- Learning opportunities with courses and tech conferences. Mentorship from seniors in the team.
- Last but not the least, competitive salary packages and fast paced growth opportunities.
Who are we looking for?
The ideal candidate is a strong software developer or a researcher with experience building and shipping production grade data science applications at scale. Such a candidate has keen interest in liaising with the business and product teams to understand a business problem, and translate that into a data science problem. You are also expected to develop capabilities that open up new business productization opportunities.
We are looking for someone with 6+ years of relevant experience working on problems in NLP or Computer Vision with a Master's degree (PhD preferred).
Key problem areas
- Preprocessing and feature extraction from noisy and unstructured data -- both text and images.
- Keyphrase extraction, sequence labeling, entity relationship mining from texts in different domains.
- Document clustering, attribute tagging, data normalization, classification, summarization, sentiment analysis.
- Image based clustering and classification, segmentation, object detection, extracting text from images, generative models, recommender systems.
- Ensemble approaches for all the above problems using multiple text and image based techniques.
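Several of the problem areas above (document clustering, attribute tagging, classification) build on a notion of document similarity. A minimal pure-Python sketch of bag-of-words cosine similarity, the simplest such measure; real pipelines add TF-IDF weighting, proper tokenization, or learned embeddings on top:

```python
import math
from collections import Counter

def cosine_similarity(doc_a: str, doc_b: str) -> float:
    """Bag-of-words cosine similarity between two short documents.

    Each document becomes a term-frequency vector; the score is the
    cosine of the angle between them (1.0 = same word distribution,
    0.0 = no shared terms).
    """
    a = Counter(doc_a.lower().split())
    b = Counter(doc_b.lower().split())
    dot = sum(a[term] * b[term] for term in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0 or norm_b == 0:
        return 0.0
    return dot / (norm_a * norm_b)

# Same words in a different order score 1.0; disjoint vocabularies score 0.0.
print(round(cosine_similarity("red cotton shirt", "cotton shirt red"), 2))  # 1.0
print(cosine_similarity("red cotton shirt", "laptop charger"))              # 0.0
```

Clustering noisy product titles at scale is then a matter of computing pairwise (or approximate) similarities over millions of such vectors.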
Relevant set of skills
- Have a strong grasp of concepts in computer science, probability and statistics, linear algebra, calculus, optimization, algorithms and complexity.
- Background in one or more of information retrieval, data mining, statistical techniques, natural language processing, and computer vision.
- Excellent coding skills on multiple programming languages with experience building production grade systems. Prior experience with Python is a bonus.
- Experience building and shipping machine learning models that solve real world engineering problems. Prior experience with deep learning is a bonus.
- Experience building robust clustering and classification models on unstructured data (text, images, etc). Experience working with Retail domain data is a bonus.
- Ability to process noisy and unstructured data to enrich it and extract meaningful relationships.
- Experience working with a variety of tools and libraries for machine learning and visualization, including numpy, matplotlib, scikit-learn, Keras, PyTorch, Tensorflow.
- Use the command line like a pro. Be proficient in Git and other essential software development tools.
- Working knowledge of large-scale computational models such as MapReduce and Spark is a bonus.
- Be a self-starter: someone who thrives in fast-paced environments with minimal ‘management’.
- It's a huge bonus if you have some personal projects (including open source contributions) that you work on during your spare time. Show off some of your projects you have hosted on GitHub.
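The MapReduce model mentioned in the skills list can be illustrated in-process with the canonical word count; a Spark or Hadoop job distributes exactly this map -> shuffle -> reduce shape across a cluster:

```python
from collections import defaultdict
from itertools import chain

# The MapReduce word count, run in a single process to show the shape.
docs = ["red shirt", "blue shirt", "red dress"]

# Map: emit (word, 1) pairs from every document.
mapped = chain.from_iterable(((word, 1) for word in doc.split()) for doc in docs)

# Shuffle: group emitted values by key. In a cluster this step moves
# data over the network so all values for a key land on one reducer.
groups = defaultdict(list)
for word, count in mapped:
    groups[word].append(count)

# Reduce: aggregate each group's values.
counts = {word: sum(values) for word, values in groups.items()}
print(counts)  # {'red': 2, 'shirt': 2, 'blue': 1, 'dress': 1}
```

The same three-phase decomposition is why word count ports directly to Spark: the map and reduce functions are unchanged, and the framework owns the shuffle.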
Role and responsibilities
- Understand the business problems we are solving. Build data science capabilities that align with our product strategy.
- Conduct research. Do experiments. Quickly build throw away prototypes to solve problems pertaining to the Retail domain.
- Build robust clustering and classification models in an iterative manner that can be used in production.
- Constantly think scale, think automation. Measure everything. Optimize proactively.
- Take end to end ownership of the projects you are working on. Work with minimal supervision.
- Help scale our delivery, customer success, and data quality teams with constant algorithmic improvements and automation.
- Take initiatives to build new capabilities. Develop business awareness. Explore productization opportunities.
- Be a tech thought leader. Add passion and vibrance to the team. Push the envelope. Be a mentor to junior members of the team.
- Stay on top of latest research in deep learning, NLP, Computer Vision, and other relevant areas.


