Dori AI provides enterprises with AI-powered video analytics that significantly increase human productivity and improve process compliance. We leverage a proprietary full-stack, end-to-end computer vision and deep learning platform to rapidly build and deploy AI solutions for enterprises. The platform was built with enterprise considerations in mind, including time-to-value, time-to-market, security, and scalability across a range of use cases. Capture visual data across multiple sites, leverage AI + Computer Vision to gather key insights, and make decisions with actionable visual insights. Launch CV applications in a matter of weeks, optimized for both cloud and edge deployments.
We are looking for a Python Developer to analyze large amounts of raw information to find patterns that will help improve our company. We will rely on you to build data products to extract valuable business insights.
In this role, you should be highly analytical, with a knack for math and statistics. Critical thinking and problem-solving skills are essential for interpreting data. We also want to see a passion for machine learning and research.
Your goal will be to help our company analyze trends to make better decisions.
1. 2 to 8 years of relevant industry experience
2. Solid grounding in linear algebra, statistics, and probability (e.g., distributions), plus machine learning and deep learning
3. Strong mathematical and statistics background is a must
4. Experience in machine learning frameworks such as TensorFlow, Caffe, PyTorch, or MXNet
5. Strong industry experience in using design patterns, algorithms and data structures
6. Industry experience in using feature engineering, model performance tuning, and optimizing machine learning models
7. Hands-on development experience in Python and packages such as NumPy, scikit-learn, and Matplotlib.
About Dori AI
At Dori, we develop platforms and services that enable artificial intelligence centered application development for mobile edge devices, embedded IoT devices, on-premise servers, and cloud platforms. The company provides a turnkey solution to add intelligence in applications by simplifying model development and deployment.
We have developed an AI-as-a-service platform that provides prebuilt and custom engines to evaluate, deploy, and monitor artificial intelligence systems for consumer and enterprise applications. Application developers can rapidly develop and deploy AI-enabled applications for multiple operating systems, hardware architectures, and cloud infrastructures.
- Convert machine learning models into application program interfaces (APIs) so that other applications can use them
- Build AI models from scratch and help different parts of the organization (such as product managers and other stakeholders) understand the results the models deliver
- Build data ingestion and data transformation infrastructure
- Automate infrastructure that the data science team uses
- Perform statistical analysis and tune the results so that the organization can make better-informed decisions
- Set up and manage AI development and product infrastructure
- Be a good team player, as coordinating with others is a must
Job Location: Chennai
The Engineering team is seeking a Data Architect. As a Data Architect, you will drive a
Data Architecture strategy across various Data Lake platforms. You will help develop
reference architecture and roadmaps to build highly available, scalable and distributed
data platforms using cloud based solutions to process high volume, high velocity and
wide variety of structured and unstructured data. This role is also responsible for driving
innovation, prototyping, and recommending solutions. Above all, you will influence how
users interact with Condé Nast’s industry-leading journalism.
The Data Architect is responsible for:
• Demonstrated technology and personal leadership experience in architecting,
designing, and building highly scalable solutions and products.
• Enterprise scale expertise in data management best practices such as data integration,
data security, data warehousing, metadata management and data quality.
• Extensive knowledge and experience in architecting modern data integration
frameworks and highly scalable distributed systems using open-source and emerging data technologies.
• Experience building external cloud (e.g. GCP, AWS) data applications and capabilities is a plus.
• Expert ability to evaluate, prototype and recommend data solutions and vendor
technologies and platforms.
• Proven experience in relational, NoSQL, ELT/ETL technologies and in-memory databases.
• Experience with DevOps, Continuous Integration and Continuous Delivery technologies
• This role requires 15+ years of experience in data solution architecture, design, and development.
• Solid experience in Agile methodologies (Kanban and SCRUM)
• Very strong experience in building large-scale, high-performance data platforms.
• Passionate about technology and delivering solutions for difficult and intricate
problems. Current on relational and NoSQL databases in the cloud.
• Proven leadership skills; demonstrated ability to mentor, influence, and partner with
cross-functional teams to deliver scalable, robust solutions.
• Mastery of relational databases, NoSQL, ETL/ELT (such as Informatica, DataStage),
and data integration technologies.
• Experience in at least one object-oriented programming language (Java, Scala, or Python).
• Creative view of markets and technologies combined with a passion to create new solutions.
• Knowledge of cloud-based distributed/hybrid data-warehousing solutions and data
lakes is mandatory.
• Good understanding of emerging technologies and their applications.
• Understanding of code versioning tools such as Git/GitHub, SVN, and CVS.
• Understanding of Hadoop Architecture and Hive SQL
• Knowledge of at least one workflow orchestration tool
• Understanding of Agile framework and delivery
● Experience in AWS and EMR would be a plus
● Exposure to workflow orchestration tools like Airflow is a plus
● Exposure to any one of the NoSQL databases would be a plus
● Experience in Databricks along with PySpark/Spark SQL would be a plus
● Experience with the Digital Media and Publishing domain would be a plus
● Understanding of Digital web events, ad streams, context models
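Workflow orchestration comes up twice in the lists above; at its core, a tool like Airflow executes a DAG of tasks in dependency order. A minimal stdlib sketch with hypothetical task names (no retries, scheduling, or parallelism, which real orchestrators add on top):

```python
from graphlib import TopologicalSorter

# Hypothetical ETL pipeline: each task maps to the set of tasks it depends on.
DEPS = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

def run_pipeline(deps):
    """Resolve the DAG into a linear order and run each task once,
    like a bare-bones scheduler."""
    order = list(TopologicalSorter(deps).static_order())
    for task in order:
        print(f"running {task}")
    return order
```

Production orchestrators layer scheduling, retries, and parallel execution of independent tasks on top of exactly this dependency resolution.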
About Condé Nast
CONDÉ NAST INDIA (DATA)
Over the years, Condé Nast successfully expanded and diversified into digital, TV, and social
platforms, generating a staggering amount of user data along the way. Condé Nast made the right
move to invest heavily in understanding this data and formed a whole new Data team
entirely dedicated to data processing, engineering, analytics, and visualization. This team
helps drive engagement, fuel process innovation, further content enrichment, and increase
market revenue. The Data team aims to create a company culture where data is the
common language and to facilitate an environment where insights shared in real time
can drive action across the business.
The Global Data team operates out of Los Angeles, New York, Chennai, and London. The
team at Condé Nast Chennai works extensively with data to amplify its brands' digital
capabilities and boost online revenue. We are broadly divided into four groups, Data
Intelligence, Data Engineering, Data Science, and Operations (including Product and
Marketing Ops, Client Services) along with Data Strategy and monetization. The teams built
capabilities and products to create data-driven solutions for better audience engagement.
What we look forward to:
We want to welcome bright, new minds into our midst and work together to create diverse
forms of self-expression. At Condé Nast, we encourage the imaginative and celebrate the
extraordinary. We are a media company for the future, with a remarkable past. We are
Condé Nast, and It Starts Here.
- Architect and design our customers' data-driven applications and solutions, and own the back-end technology
- Develop architectures that are inherently secure, robust, scalable, modular, and API-centric
- Build distributed backend systems serving real-time analytics and machine learning features at scale
- Own the scalability and performance metrics of complex distributed systems.
- Apply architecture best practices that help increase execution velocity
- Collaborate with the key stakeholders, like business, product, and other technology teams
- Mentor junior members in the team
- Excellent Academic Background (MS/B.Tech from a top tier university)
- 6-10 years of experience in backend architecture and development with large data volumes
- Extensive hands-on experience in the Big Data ecosystem (e.g., Hadoop, Spark, Presto, Hive), relational databases (e.g., MySQL, PostgreSQL), NoSQL stores (e.g., MongoDB, Cassandra), and data warehousing (e.g., Redshift)
- Experience building scalable, robust cloud-based technology solutions
- Strong data management and migration experience including proficiency in data warehousing, data quality, and analysis.
- Experience in the development of microservices/REST APIs
- Experience with Agile and DevOps development methodology and tools like Jira, Confluence
- Understanding/exposure to complete product development cycle
Job Description :
The Sr. Machine Learning Engineer will support our various business vertical teams with insights gained from analyzing company data. The ideal candidate is adept at using large data sets to find opportunities for product and process optimization and at using models to test the effectiveness of different courses of action. They must have strong experience with a variety of data mining and data analysis methods and tools, building and implementing models, using and creating algorithms, and creating and running simulations. They must have a proven ability to drive business results with data-based insights, and must be comfortable working with a wide range of stakeholders and functional teams. The right candidate will have a passion for discovering solutions hidden in large data sets and for working with stakeholders to improve business outcomes.
- Collaborate with product management and engineering departments to understand company needs and devise possible solutions
- Keep up to date with the latest technology trends
- Communicate results and ideas to key decision makers
- Implement new statistical or other mathematical methodologies as needed for specific models or analysis
- Optimize joint development efforts through appropriate database use and project design
Skills & Requirements :
Technical Skills :
- Demonstrated skill in the use of one or more analytic software tools or languages (e.g., R, Python, Pyomo, Julia/JuMP, MATLAB, SAS, SQL)
- Demonstrated skill at data cleansing, data quality assessment, and using analytics for data assessment
- End-to-end system design: data analysis, feature engineering, technique selection & implementation, debugging, and maintenance in production.
- Deep understanding of techniques such as outlier handling, data imputation, the bias-variance trade-off, and cross-validation
- Demonstrated skill in modeling techniques, including but not limited to predictive modeling, supervised and unsupervised learning, statistical modeling, natural language processing, and recommendation engines
- Demonstrated skill in analytic prototyping, analytic scaling, and solutions integration
- Developing hypotheses and setting up your own problem frameworks to test for the best solutions
- Knowledge of data visualization tools such as ggplot, Dash, D3.js, and Matplotlib (or other tools such as Tableau or QlikView)
- Generating insights for a business context
- Experience with cloud technologies for building, deploying and delivering data science applications is desired (preferably in Microsoft Azure)
- Experience in TensorFlow, Keras, Theano, or text mining is desirable but not mandatory
- Experience working in Agile and DevOps processes.
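Two of the skills listed above, data imputation and outlier handling, reduce to short routines. A stdlib-only sketch for illustration (real work would typically use pandas or scikit-learn):

```python
import statistics

def impute_median(values):
    """Replace missing entries (None) with the median of the observed values."""
    observed = [v for v in values if v is not None]
    med = statistics.median(observed)
    return [med if v is None else v for v in values]

def clip_outliers(values, k=1.5):
    """Clip values to Tukey's fences: [Q1 - k*IQR, Q3 + k*IQR]."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    iqr = q3 - q1
    lo, hi = q1 - k * iqr, q3 + k * iqr
    return [min(max(v, lo), hi) for v in values]
```

Median imputation and IQR clipping are the simplest defensible defaults; model-based imputation and domain-specific outlier rules are common refinements.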
Core Skills :
- Bachelor's or master's degree in information technology, computer science, business administration, or a related discipline.
- Certified as Agile Product Owner / Scrum Master and/or in other Agile techniques
Leadership Skills :
- Strong stakeholder management and influencing skills. Able to articulate a vision and build support for that vision in the wider team and organization.
- Ability to self-start and direct efforts based on high-level business objectives
- Strong collaboration and leadership skills with the ability to coach and develop teams to meet new challenges.
- Strong interpersonal, communication, facilitation and presentation skills.
- Work through complex interfaces across organizational and geographic boundaries
- Excellent analytical, planning and problem solving skills
Job Experience Requirements :
- Utilize advanced knowledge of the Data Science toolbox to participate in the entire Data Science project life cycle and execute end-to-end Data Science projects
- Work end-to-end on Data Science developments contributing to all aspects of the project life cycle
- Keep customers as the focus of analysis, insight, and recommendations.
- Help define business objectives/customer needs by capturing the right requirements from the right customers.
- Can take defined problems, identify resolution paths and opportunities to solve them, and validate these by defining hypotheses and driving experiments
- Can identify unstructured problems and articulate opportunities to form new analytics project ideas
- Use and understand the key performance indicators (KPIs) and diagnostics to measure performance against business goals
- Compile, integrate, and analyze data from multiple sources to identify trends, expose new opportunities, and answer ongoing business questions
- Execute hypothesis-driven analysis to address business questions, issues, and opportunities
- Build, validate, and manage advanced models (e.g., explanatory, predictive) using statistical and/or other analytical methods
- Are familiar with working within Agile project management methodologies/structures
- Analyze results using statistical methods and work with senior team members to make recommendations to improve customer experience and business results
- Have the ability to conceptualize, formulate, prototype, and implement algorithms to capture customer behavior and solve business problems
About DSP e-business Division
The e-Business division at DSP is a specialist in-house team working to take advantage of the changing internet and mobile landscape in India, which is resulting in a growing preference for online commerce. We are working to bring in a refreshing approach to super-simplify investing in Mutual Funds.
Analytics, Automation, Design, Device-agnosticism, and Simplicity are at the heart of our e-business strategy. Our products (IFAXpress, the B2C transaction portal of DSP web, our Android app, our iOS app, and our investing decision tool) have demonstrated what we intend to do going forward.
What is the role's objective?
This role will own setting up analytics around our digital products and creating data and analytics assets that enable insights and recommendations for Digital Product Owners, Marketing, and Management.
- The opportunity is to capture insights from the large volumes of data that flow from customer visits, interactions with different features on our website, and their behavior patterns.
- Expertise in handling Data from Digital analytics platforms & specialized tools
- Real-time tracking of visitor actions; set up triggers, segmentation, and KPI measurement to enable changes in the user/investor journey
- Work closely in partnership with Digital Product Owners and Marketing teams
What skills do you need to possess?
- Proficient in handling digital data (millions of rows) generated by user actions on the website/app
- Proficient in any suite of digital tools and CDPs, such as the Adobe suite (SiteCatalyst, Experience Manager, Omniture), Google Analytics 360, Lemnisk, WebEngage, etc.
- Experience in Tag implementation, measurement and optimization
- Setting up dashboards to track and measure various digital KPIs
- Exposure to cloud platforms like AWS or GCP, or big data technologies such as HDFS and Kafka
- Good to have domain experience and exposure of creating insight pack for senior management
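As an illustration of the KPI measurement mentioned above, a conversion-rate metric over raw visitor events can be computed in a few lines; the event schema and action names here are hypothetical:

```python
from collections import defaultdict

# Hypothetical event log: (visitor_id, action) pairs from the website/app.
EVENTS = [
    ("v1", "visit"), ("v1", "click_invest"), ("v1", "purchase"),
    ("v2", "visit"), ("v2", "click_invest"),
    ("v3", "visit"),
]

def conversion_rate(events, goal="purchase"):
    """Fraction of visitors whose event stream contains the goal action."""
    actions_by_visitor = defaultdict(set)
    for visitor, action in events:
        actions_by_visitor[visitor].add(action)
    converted = sum(1 for acts in actions_by_visitor.values() if goal in acts)
    return converted / len(actions_by_visitor)
```

Tools like Adobe Analytics or Google Analytics 360 compute segmented variants of exactly this kind of metric; the sketch shows the underlying aggregation.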
Job Description:
We are looking for applicants who have a demonstrated research background in machine learning, a passion for independent research and technical problem-solving, and a proven ability to develop and implement ideas from research. The candidate will collaborate with researchers and engineers of multiple disciplines within Ideapoke, in particular with researchers on the data collection and development teams, to develop advanced data analytics solutions and work with massive amounts of data collected from various sources.
- 4 to 5 years of academic or professional experience in Artificial Intelligence and Data Analytics, Machine Learning, Natural Language Processing/Text Mining, or a related field.
- Technical ability and hands-on expertise in Python, R, XML parsing, Big Data, NoSQL, and SQL