DeepIntent (www.deepintent.com) is a next-generation advertising technology company applying state-of-the-art artificial intelligence to improve the way ads are bought and sold globally. As the only DSP offering deeply contextual campaign targeting of individual concepts and their related sentiments, DeepIntent gives advertisers a unique way to discover and dynamically message audiences across both the major exchanges and direct-sold inventory.
DeepIntent is pioneering a new era of understanding ad performance by user interest. In addition to higher yields, our publishers receive rich performance information at a per-concept, per-sentiment level, all in real time and beautifully visualized in our UI.
ITTStar Global Services is a subsidiary unit in Bengaluru, with its head office in Atlanta, Georgia. We work primarily in data management and data life-cycle solutions, which include machine learning and artificial intelligence. For further information, visit ITTstar.com. As discussed over the call, I am forwarding the job description. We are looking for enthusiastic and experienced data engineers to join our bustling team of professionals at our Bengaluru location.

JOB DESCRIPTION:
1. Experience in Spark and Big Data is mandatory.
2. Strong programming skills in Python, Java, Scala, or Node.js.
3. Hands-on experience handling multiple data types: JSON, XML, delimited, and unstructured.
4. Hands-on experience working with at least one relational and/or NoSQL database.
5. Knowledge of SQL queries and data modeling.
6. Hands-on experience with ETL use cases, either on-premise or in the cloud.
7. Experience with any cloud platform (AWS, Azure, GCP, Alibaba).
8. Knowledge of one or more AWS services such as Kinesis, EC2, EMR, Hive integration, Athena, Firehose, Lambda, S3, Glue Crawler, Redshift, or RDS is a plus.
9. Good communication skills and self-driven: should be able to deliver projects with minimal instruction from the client.
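To illustrate the kind of multi-format data handling point 3 asks for, here is a minimal, stdlib-only Python sketch that normalizes JSON and delimited input into a common record shape. The field names and sample data are invented for the example; a real pipeline would typically do this at scale in Spark.

```python
import csv
import io
import json

def load_json_records(text):
    """Parse a JSON array of objects into a list of dicts."""
    return json.loads(text)

def load_delimited_records(text, delimiter=","):
    """Parse delimited text (header row first) into a list of dicts."""
    return [dict(row) for row in csv.DictReader(io.StringIO(text), delimiter=delimiter)]

# Hypothetical sample inputs describing the same kind of user record
json_src = '[{"id": "1", "name": "asha"}, {"id": "2", "name": "ravi"}]'
csv_src = "id,name\n1,asha\n2,ravi\n"

# Both sources land in one uniform list of dicts, ready for downstream ETL
records = load_json_records(json_src) + load_delimited_records(csv_src)
print(len(records))  # 4
```

The same normalize-to-records idea carries over to XML (via `xml.etree.ElementTree`) or to Spark's `spark.read.json` / `spark.read.csv` readers.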
Job Description

Mandatory:
☞ Deep knowledge of a programming language, ideally Python
☞ Deep knowledge of selenium, numpy, openpyxl, beautifulsoup, pandas, and sklearn
☞ Knowledge of at least one web framework, ideally a Python-friendly one, e.g. Django, Flask, Pyramid
☞ Ability to extract data from multiple sources and apply heavy transformations to a wide array of tables
☞ Familiarity with event-driven and object-oriented programming
☞ Ability to integrate multiple data sources and databases into one system
☞ Understanding of fundamental design principles for scalable applications

Preferred:
☞ High proficiency with at least one code repository
☞ Experience leading and managing deadlines and responsibilities for your own work

You will be expected to:
☞ Understand the existing overall architecture and improve it according to the latest standards and best practices, e.g. the existing Extract-Treat-Load scripts
☞ Use best practices such as code reviews, push/pull workflows, and agile development methods
☞ Quickly improve the overall quality of the code by developing automated testing

We are also working on some exciting machine learning applications that you can join if interested.
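The "event-driven and object-oriented programming" requirement can be sketched with a tiny publish/subscribe dispatcher in plain Python. The class name, event name, and payload below are all hypothetical, chosen only to show the pattern:

```python
from collections import defaultdict

class EventBus:
    """Minimal publish/subscribe dispatcher illustrating event-driven design."""

    def __init__(self):
        self._handlers = defaultdict(list)

    def subscribe(self, event, handler):
        """Register a callable to run whenever `event` is published."""
        self._handlers[event].append(handler)

    def publish(self, event, payload):
        """Invoke every handler registered for `event` with the payload."""
        for handler in self._handlers[event]:
            handler(payload)

# Hypothetical usage: react to a "row_loaded" event from an ETL step
bus = EventBus()
seen = []
bus.subscribe("row_loaded", seen.append)
bus.publish("row_loaded", {"id": 1})
print(seen)  # [{'id': 1}]
```

Real systems would usually reach for a message broker or a framework's signal mechanism, but the decoupling idea is the same: producers publish events without knowing who consumes them.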
Job Description:
Develop and deliver the automation software required for building and improving the functionality, reliability, availability, and manageability of applications and cloud platforms
Champion and drive the adoption of Infrastructure as Code (IaC) practices and mindset
Design, architect, and build self-service, self-healing, synthetic monitoring and alerting platforms and tools
Automate the development and test-automation processes through a CI/CD pipeline (Git, Jenkins, SonarQube, Artifactory, Docker containers)
Build a container hosting platform using Kubernetes
Introduce new cloud technologies, tools, and processes to keep innovating in the commerce area and drive greater business value

Must Haves:
Proficiency in deploying and maintaining cloud-based infrastructure services (AWS, GCP, Azure; good hands-on experience with at least one of them)
Well versed in service-oriented architecture, cloud-based web services architecture, design patterns, and frameworks
Good knowledge of cloud services such as compute, storage, network, messaging (e.g. SNS, SQS), and automation (e.g. CFT/Terraform)
Experience with relational SQL and NoSQL databases, including Postgres and Cassandra
Experience with systems management/automation tools (Puppet/Chef/Ansible, Terraform)
Strong Linux system administration experience with excellent troubleshooting and problem-solving skills
Hands-on experience with languages (Bash/Python/Core Java/Scala)
Experience with CI/CD pipelines (Jenkins, Git, Maven, etc.)
Experience integrating solutions in a multi-region environment
Self-motivated; learns quickly and delivers results with minimal supervision
Experience with Agile/Scrum/DevOps software development methodologies

Nice to have:
Experience setting up an Elasticsearch-Logstash-Kibana (ELK) stack
Experience working with large-scale data
Experience with monitoring tools such as Splunk, Nagios, Grafana, DataDog, etc.
Prior experience working with distributed architectures like Hadoop, MapReduce, etc.
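The "self-healing" automation mentioned above often reduces to retry-with-backoff around a health check. A toy stdlib-only Python sketch, with invented names and a simulated flaky service standing in for a real endpoint:

```python
import time

def with_retries(func, attempts=3, delay=0.0):
    """Call func, retrying on failure: a toy version of 'self-healing'
    automation. `attempts` and `delay` are illustrative defaults; production
    code would add exponential backoff, jitter, and alerting on exhaustion."""
    last_error = None
    for _ in range(attempts):
        try:
            return func()
        except Exception as exc:
            last_error = exc
            time.sleep(delay)
    raise last_error

# Simulate a flaky health check that fails twice, then succeeds
calls = {"n": 0}
def flaky_check():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("service not ready")
    return "healthy"

print(with_retries(flaky_check))  # healthy
```

In practice this logic usually lives in the monitoring platform itself (e.g. a Kubernetes liveness probe restarting a pod), but the retry loop is the core idea.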
• 6-8 years' experience in Java/J2EE development
• Extensive technical experience and development expertise in Core Java and J2EE
• Must have worked on Spring and web services
• Good knowledge of databases and NoSQL stores like Redis
• Advanced knowledge of object-oriented analysis and design (OOA/OOD), the J2EE framework, and data architectures
• Experience applying design patterns to solve problems
• Hands-on experience with JBoss/WildFly servers
• Able to build solutions that are easily configurable, deployable, and secure in a SaaS environment
• Responsible for planning product iterations and releasing iterations on schedule
• Ability to lead and mentor a team
• Able to identify, track, and mitigate risks to the product
Work on different POCs. Experience in Java/J2EE programming and coding. And many more ..
We are looking to hire a Tech Lead experienced in virtualization, cloud, and distributed systems to work on next-generation software-defined compute, storage, and networking technologies. You will be involved in architecting and building infrastructure products, which requires knowledge of virtualization, containers, networking, and storage. Technical leadership, code reviews, pair programming, and writing production-level code along with unit tests will be your daily job.
We are looking for SFDC developers who are ready to work for our client, a Level 5 organization, at our Hyderabad location. Freshers who have completed Salesforce certification are also preferred. Note: only immediate joiners are preferred.
AppDynamics Certified, Docker containers, deployment testing, UNIX and Linux, database testing, Java, Selenium, Jenkins, TestNG