



Job Description
We are looking for an experienced engineer to join our data science team and help us design, develop, and deploy machine learning models in production. You will develop robust models, prepare their controlled deployment to production, and provide appropriate means to monitor their performance and stability after release.
What you'll do (including, but not limited to):
- Preparing datasets needed to train and validate our machine learning models
- Anticipating and building solutions for problems that interrupt availability, performance, and stability in our systems, services, and products at scale
- Defining and implementing metrics to evaluate model performance, both computational (such as CPU and memory usage) and ML-specific (such as precision, recall, and F1)
- Supporting the deployment of machine learning models on our infrastructure, including containerization, instrumentation, and versioning
- Supporting the whole lifecycle of our machine learning models, including gathering data for retraining, A/B testing, and redeployment
- Developing, testing, and evaluating tools for machine learning model deployment, monitoring, and retraining
- Working closely within a distributed team to analyze and apply innovative solutions over billions of documents
- Supporting solutions ranging from rule-based systems and classical ML techniques to the latest deep learning systems
- Partnering with cross-functional team members to bring large-scale data engineering solutions to production
- Communicating your approach and results to a wider audience through presentations
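As a rough illustration of the ML-performance metrics this role works with (precision, recall, F1), here is a minimal sketch in plain Python; the function and variable names are hypothetical, not from any specific codebase:

```python
# Minimal sketch: precision, recall, and F1 from binary predictions.
# Names here are illustrative assumptions, not a real internal API.

def precision_recall_f1(y_true, y_pred):
    """Compute precision, recall, and F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

p, r, f1 = precision_recall_f1([1, 0, 1, 1, 0], [1, 0, 0, 1, 1])
```

In production these would typically come from a library such as scikit-learn; the hand-rolled version just makes the definitions explicit.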
Your Qualifications:
- Demonstrated success with machine learning in a SaaS or cloud environment, with hands-on knowledge of model creation and deployment in production at scale
- Good knowledge of traditional machine learning methods and neural networks
- Experience with practical machine learning modeling, especially time-series forecasting, analysis, and causal inference
- Experience with data mining algorithms and statistical modeling techniques for anomaly detection in time series such as clustering, classification, ARIMA, and decision trees is preferred.
- Ability to implement data import, cleansing and transformation functions at scale
- Fluency in Docker, Kubernetes
- Working knowledge of relational and dimensional data models, with appropriate visualization and dimensionality-reduction techniques such as PCA
- Solid English skills to effectively communicate with other team members
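The anomaly-detection qualification above names ARIMA, clustering, and classification; as a much simpler statistical baseline that illustrates the same idea, here is a rolling z-score detector in plain Python (all names and thresholds are hypothetical):

```python
# Minimal sketch of time-series anomaly detection via a rolling z-score.
# (The posting mentions ARIMA and clustering; this simpler baseline is
# only meant to illustrate the general idea.)
from statistics import mean, stdev

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Flag indices whose value deviates from the trailing window's mean
    by more than `threshold` sample standard deviations."""
    anomalies = []
    for i in range(window, len(series)):
        hist = series[i - window:i]
        mu, sigma = mean(hist), stdev(hist)
        if sigma > 0 and abs(series[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# A flat series with one spike at index 7:
data = [10, 10, 11, 10, 10, 11, 10, 50, 10, 11]
idx = rolling_zscore_anomalies(data, window=5, threshold=3.0)
```

Note the trailing window deliberately excludes the current point, so a single spike cannot inflate its own baseline.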
Due to the nature of the role, it would be nice if you have also:
- Experience with large datasets and distributed computing, especially with the Google Cloud Platform
- Fluency in at least one deep learning framework: PyTorch, TensorFlow / Keras
- Experience with NoSQL and graph databases
- Experience working in a Colab, Jupyter, or Python notebook environment
- Some experience with monitoring, analysis, and alerting tools like New Relic, Prometheus, and the ELK stack
- Knowledge of Java, Scala, or Go
- Familiarity with Kubeflow
- Experience with transformers, for example the Hugging Face libraries
- Experience with OpenCV
About Egnyte
In a content-critical age, Egnyte fuels business growth by enabling content-rich business processes, while also providing organizations with visibility and control over their content assets. Egnyte’s cloud-native content services platform leverages the industry’s leading content intelligence engine to deliver a simple, secure, and vendor-neutral foundation for managing enterprise content across business applications and storage repositories. More than 16,000 customers trust Egnyte to enhance employee productivity, automate data management, and reduce file-sharing cost and complexity. Investors include Google Ventures, Kleiner Perkins Caufield & Byers, and Goldman Sachs. For more information, visit www.egnyte.com
#LI-Remote

You will design front-end architecture with architectural guidelines in mind (secure, high-performing, scalable, extensible, flexible, simple).
● Architect, design, and develop front-end applications.
Desired Skills
● Minimum 4+ years of coding experience.
● Proficiency using modern web development technologies and techniques, including HTML5, CSS, JavaScript, etc.
● Knowledge of professional software engineering practices & best practices for the full software development life cycle, including coding standards, code reviews, source control management, build processes, testing, and operations.
● Programming experience with at least one modern framework such as React or Vue.js.
● A strong product design sense.
● Ability to understand end-user requirements, formulate use cases, and come up with effective solutions.
● Good understanding of REST APIs and the web in general.
● Ability to build a feature from scratch & drive it to completion.
● Excellence in technical communication with peers and non-technical cohorts.
● Ability to learn and work independently and make decisions with minimal supervision.

Roles & Responsibilities:
- Build a high-quality web application from scratch.
- Implement responsive and adaptive web pages.
- Identify and communicate best practices for back-end engineering.
- Design, build and maintain efficient, reusable, and reliable code to ensure the applications' best possible performance, quality, and responsiveness.
- Implement automated testing platforms and unit tests.
- Prepare and maintain all applications utilizing standard development tools.
- Keep abreast of new trends and best practices in web development.
- Learn new software, frameworks, languages, and technologies with ease.
- Must be able to apply innovative thinking at work.
Desired Skills:
- Good working knowledge of PHP-based MVC framework Laravel.
- Good knowledge of PHP, HTML, CSS & JavaScript and understanding of Database concepts & working knowledge of MySQL.
- A clear understanding of RESTful API development standards.
- Excellent problem-solving skills.
- Experience with software testing (PHPUnit, PHPSpec).
Senior Database Developer (PL/SQL)
Work Experience: 8+ Years
Number of Vacancies: 2
Location:
CTC: As per industry standards
Job Position: Oracle PL/SQL Developer
Required: Oracle Certified Database Developer
Key Skills:
- Basic knowledge of SQL queries, joins, DDL, DML, TCL, types, objects, and collections. Basic Oracle PL/SQL programming experience (procedures, packages, functions, exceptions).
- Develop, implement, and optimize stored procedures and functions using PL/SQL
- Writing queries, packages, procedures, functions, triggers, and ref cursors using Oracle 11g to 19c features, including the design of stored procedures, functions, packages, tables, views, triggers, indexes, constraints, collections, and bulk collects
- Basic knowledge of the PL/SQL Developer tool
- Basic knowledge of MySQL and MongoDB administration
- Strong communication skills
- Good interpersonal and teamwork skills
- PL/SQL stored procedures, functions, and triggers
- Bulk collect
- UTL_FILE
- Materialized views
- Performance handling
- Usage of hints in queries
- JSON (JSON objects, JSON tables, JSON queries)
- BLOB and CLOB concepts
- External tables
- Dynamic SQL
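Of the skills listed above, dynamic SQL is perhaps the least self-explanatory: building a statement's text at runtime rather than hard-coding it. A minimal sketch follows, using Python's built-in sqlite3 rather than Oracle PL/SQL's EXECUTE IMMEDIATE, purely for illustration; the table, columns, and function are invented:

```python
# Minimal sketch of dynamic SQL: the statement text is assembled at
# runtime. Shown with sqlite3 instead of Oracle PL/SQL; the schema
# and helper names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE emp (id INTEGER, name TEXT, dept TEXT)")
conn.executemany("INSERT INTO emp VALUES (?, ?, ?)",
                 [(1, "Asha", "HR"), (2, "Ravi", "IT"), (3, "Mina", "IT")])

def find_by(column, value):
    """Build the statement text at runtime; the column name is checked
    against an allow-list, and the value is passed as a bind parameter
    to guard against SQL injection."""
    assert column in {"id", "name", "dept"}
    sql = f"SELECT name FROM emp WHERE {column} = ? ORDER BY id"
    return [row[0] for row in conn.execute(sql, (value,))]

names = find_by("dept", "IT")
```

The same two safeguards, allow-listing identifiers and binding values, apply when concatenating statement text in PL/SQL.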
About the Company: Among the top five global media agencies, we provide access and scale everywhere our clients do business. Intelligent and imaginative, we create, integrate, and scale technology-enabled services with premium partners.
Job Location: Mumbai
Roles and Responsibilities
- Devising solutions for the client by identifying and configuring the technology, data and processes that improve media targeting, and deepen analytical learning. This can encompass comprehensive audits of a client’s current setup, as well as leading data and tech partner evaluations, and defining measurement frameworks to capture the most valuable signals
- Aid in data partner discovery, vetting, onboarding and cataloguing, providing the relevant teams with a view on the latest solutions in the market, as well as current trends and issues relating to data management
- Support the implementation of client audience data strategy programs, specifically how client data is captured, managed, enhanced (through partnerships), deployed and measured for effectiveness, working closely with adjacent disciplines in Planning, AdOps, Activation and Analytics to ensure audience management best practices flow down and through all client media campaigns
- Maintain strong, effective relationships with key partners and suppliers within the ad/mar-tech data space
- Build strong relationships with key client stakeholders, including communicating with them on subjects outside your remit, and build a reputation for excellence in communication
Desired background experience:
Required
- Demonstrable knowledge of the ad tech and data landscape, including hands-on experience managing audience data for your own company or your clients. This includes a working knowledge of the leading data management platforms (DMPs) and customer data platforms (CDPs), knowledge of data onboarding and cross-device matching approaches, and an understanding of the tools and technologies that can activate this data (e.g. DSPs, social platforms)
- Familiarity and experience with cloud-based data solutions (e.g. Google Cloud Platform, Amazon Web Services) and the tools within them (e.g. BigQuery, Ads Data Hub) for data warehousing, insight generation, and/or deployment
- Strong analytical skills and a natural affinity for numbers are key; you must be able to analyze raw data, draw insights, and develop actionable recommendations as needed
- Exceptional verbal & written communication skills, able to build and develop strong relationships with, and communicate effectively with people at all levels of seniority; in-agency, client-side, and in the supplier space
- Strong organizational skills, including experience with resource management, to effectively manage the smooth flow of work through the agency

- Analyze and organize raw data
- Build data systems and pipelines
- Evaluate business needs and objectives
- Interpret trends and patterns
- Conduct complex data analysis and report on results
- Build algorithms and prototypes
- Combine raw information from different sources
- Explore ways to enhance data quality and reliability
- Identify opportunities for data acquisition
- Should be a senior developer with experience in Python and Django microservices, with a financial services/investment banking background
- Develop analytical tools and programs
- Collaborate with data scientists and architects on several projects
- Should have 5+ years of experience as a data engineer or in a similar role
- Technical expertise with data models, data mining, and segmentation techniques
- Should have experience with programming languages such as Python
- Hands-on experience with SQL database design
- Great numerical and analytical skills
- Degree in Computer Science, IT, or similar field; a Master’s is a plus
- Data engineering certification (e.g. IBM Certified Data Engineer) is a plus
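The pipeline duties listed above (combining raw records from different sources, improving data quality and reliability) can be sketched in a few lines of plain Python; the sources, field names, and helpers are illustrative assumptions, not a real schema:

```python
# Minimal sketch: combine raw records from two hypothetical sources,
# normalize them for quality, and deduplicate by id.

source_a = [{"id": 1, "name": " Alice "}, {"id": 2, "name": "Bob"}]
source_b = [{"id": 2, "name": "bob"}, {"id": 3, "name": "Carol"}]

def normalize(record):
    """Trim and lowercase the name field to improve data quality."""
    return {"id": record["id"], "name": record["name"].strip().lower()}

def combine(*sources):
    """Merge sources in order, keeping the first occurrence of each id."""
    seen, merged = set(), []
    for source in sources:
        for record in map(normalize, source):
            if record["id"] not in seen:
                seen.add(record["id"])
                merged.append(record)
    return merged

combined = combine(source_a, source_b)
```

Real pipelines would typically do this with SQL or a dataframe library; the sketch just shows the normalize-then-deduplicate shape.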


